
House Alarm with a Pi: The Smart Door

November 14, 2019 in Biohacking

Following on from my previous post, I decided that for my next Smarthome project it would be a good idea to build a sensor for the front door; that way I would be able to detect whether the door is open or closed.

Now I could have bought a full alarm system, but I find it more enjoyable to build my own; I even get to learn something new in the process.

Setting the scene

It was a cold dark evening in late December 2018. It was warm inside, and I was slouching on the sofa watching the light dancing around on the living room floor; shining through the slats in the window blind, I could just make out the red and blue Christmas lights from next door as they gently swayed in the wind… If only! What really happened was definitely not as interesting.


And so I now present the next post in the series. I should mention that everything here is my own work and my own opinions; if you don't like it, stop reading. On the other hand, if you have ways I could improve, let me know.

The Smart Door

I have always wanted a house alarm for when we are out, but because I don't own my own home, having an alarm fitted professionally has never been a viable option. Still, I wanted to be notified if the house was broken into while I was out or on holiday. That left me with only two real options: buy a wireless system or build my own, and as I normally do, I decided it would be more fun to build. So I set about researching alarm systems, how they work, and the design considerations involved. After a heavy research session that left my head fuzzy and spinning, I finally had enough information to start planning how the design would work in practice.

Alarm Sensors

The basic door and window sensors use a reed switch and act just like a light switch, except these switches are controlled by a magnet rather than a human finger. Two types exist on the market: "normally open" (NO) and "normally closed" (NC). A "normally closed" sensor allows current to flow when the magnet is within a centimetre or so of it (just like turning on a light switch); move the magnet more than a couple of centimetres away and the circuit opens (turning off the light). "Normally open" sensors work the opposite way.

For my system I decided that a normally closed sensor would be the better choice. The main reason is that the circuit is complete and reports an "on" state while the door is closed; when the door is later opened, the circuit is broken and reports an "off" state. As a bonus, if the wires are cut the circuit is also broken, so it looks as though the door has been opened.

Now that sounds like killing two birds with one stone, except… as I'm sure anyone with an electrical background will see, this doesn't account for the wires being stripped and joined together to create a new, unbroken circuit that can never report any other state. Bypassed like this, the door would never be reported as opened, and the door sensor would effectively be disabled.

Good for criminals but not so good for us.

The common approach is for the alarm system to monitor the circuit for a particular voltage. To make this possible, a resistor is placed either "next to" or "in line with" the door sensor; when the sensor is bypassed, the resistor is cut out of the loop, the alarm system sees the voltage difference, and the alarm is triggered.

Which would be really helpful if I had bought an alarm system.
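To make the in-line ("end of line") resistor idea concrete, here is a minimal Python sketch using a 3.3 V supply and made-up resistor values (an assumption for illustration, not the actual circuit from this post). The point is that the three loop states produce three distinct voltages, so the alarm can tell "door open" apart from "wires bypassed":

# A sketch of the end-of-line resistor trick, with hypothetical values.
V_SUPPLY = 3.3      # volts; e.g. the Pi's 3.3 V rail
R_FIXED = 10_000    # ohms; fixed pull-down at the sense point
R_EOL = 4_700       # ohms; resistor wired in series with the reed switch

def sense_voltage(door_closed: bool, wires_shorted: bool) -> float:
    """Voltage seen at the sense point for each loop state."""
    if wires_shorted:
        # Wires bridged: the resistor is cut out of the loop,
        # so the sense point is pulled all the way up to the supply.
        return V_SUPPLY
    if not door_closed:
        # Reed switch open: no path to the supply, the pull-down wins.
        return 0.0
    # Normal closed loop: supply -> resistor -> sense point -> pull-down.
    return V_SUPPLY * R_FIXED / (R_FIXED + R_EOL)

for door_closed, wires_shorted in [(True, False), (False, False), (True, True)]:
    volts = sense_voltage(door_closed, wires_shorted)
    print((door_closed, wires_shorted), round(volts, 2), "V")

With these values the three states read roughly 2.24 V, 0 V and 3.3 V, which is exactly the difference a monitoring circuit needs to detect.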

Circuit design

I'm not an electrical engineer, nor a circuit designer, so what happened next was lots and lots of trial and error. Over the course of about three months I designed at least nine circuits, with plenty of failures, and tested four of them on a breadboard; all of them had issues preventing them from working properly. Most of the problems were related to correctly detecting the voltage difference with and without the resistor in the door-sensor loop, and as I'm sure you'll see, my electronics knowledge leaves a lot to be desired.

The goal (I had chosen to accept) was simple: detect when the circuit was open and when it was closed (this was a door sensor, after all), with the added bonus, as previously mentioned, that an open circuit would also trigger when the door was opened. I also had to be able to detect when the sensor had been bypassed, by placing a resistor in line with the sensor and watching for the voltage difference, and this is where my problems started.

I also didn't have a proper door alarm sensor to hand, so I improvised and pulled apart an old "wirefree door alarm" left over from when the children were little. I kept just the reed switch and the magnet, fixed the magnet to the door and the reed switch to the frame with blu tac, and after attaching the wires I ended up with something that looked like this.

Yes, I know, blu tac! But it works and it's only for testing; I will replace this with something better once I get a proper door sensor.

Design 1

The first design was simple: with very few components it would have been cheap to produce in large numbers, and it worked perfectly in the circuit simulator. Unfortunately, that's where the benefits end. The design suffered from random radio and human-presence interference (possibly mobile-phone related) and was too temperamental to be of any actual use; it produced numerous false positives, and the fault pin never triggered in any real-world testing.

So, with my broken circuit, my lack of electrical knowledge, and a slightly better understanding of what not to do, I set to work on the next design.

Design 2

Still not really sure what I was doing, I started work on circuit 2. Design number 2 was another failed attempt to use resistors, but after further research I decided that this time I wanted a large voltage difference between the "on" (2 V or more) and "off" (under 1 V) states. After testing in two different circuit simulators I concluded that this also wouldn't work as expected in the real world, and promptly dropped the attempt.

Designs 3, 4 and 5

With a successful design in the circuit simulators, I wired these attempts up on a breadboard and tested them whilst connected to the door sensor. Unfortunately, these also suffered from interference and never proved stable enough to be of any use.

By this point I had started researching voltage dividers and op-amps to work around the problem, and I had also found something called a Zener diode.

Note: A Zener diode acts a bit like two children pushing against each other, with the strongest winning; in the real world the children are replaced by electric current, and once the voltage rises above a set value the diode gives way and starts conducting in the reverse direction.

The problem with this approach was that I didn't have a Zener diode and didn't really want to spend money on something I might give up on, so I persevered, reading and designing more circuits as I went.

Further research led me to believe that I could be picking up radio interference on the wire between the sensor and the Pi, due to it being made from thin strands, so to combat this I cut the ends off a 5-metre network (Cat 5) cable and wired it up to the door sensor.

Once I am happy the sensor works as expected I plan on replacing this cable with proper alarm cable.

Designs 6, 7 and 8

These designs never made it past the circuit simulator, as I was never able to get the voltage sent to the GPIO pin below 1 volt in the "off" state and above 2 volts in the "on" state. After more failed circuits I decided that the only way to get this working as desired was to use transistors (after all, they are used in computers with great success). I had previously added transistors to some of the circuits, but without truly appreciating their abilities I had mistakenly dismissed them as not useful to my goals. Which brings me on to the last circuit.

Design 9

Design number 9 was a bit of an "aha" moment, as the flow of electrons and how they behave in the presence of resistors and transistors finally started to make some kind of sense, lighting up the cobwebs in my head. I tested again in two separate circuit simulators before attempting to build it on a breadboard; once built, I also checked the voltage difference at both the door-state and fault pins using a voltmeter.

Happy that things looked good in the real world, I set about connecting the breadboard circuit to both the reed switch on the door and the Pi that had previously been set up for my smart doorbell project.

Now that I had the door sensor wired in, I fired up my laptop and used SSH to connect to the Raspberry Pi. Once the connection was established, I checked that I could read both GPIO pins from the command line as follows.

Note: If you're not used to the Linux terminal and bash, the { } brackets in the command below might seem confusing. Put simply, bash expands the argument once for each comma-separated value inside the braces.

$ echo "27" > /sys/class/gpio/export
$ echo "22" > /sys/class/gpio/export

$ echo "in" > /sys/class/gpio/gpio27/direction
$ echo "in" > /sys/class/gpio/gpio22/direction

$ cat /sys/class/gpio/gpio{27,22}/value

The commands listed above export the pins so that they can be accessed as if they were standard files (making terminal access easier). Next we set the direction of both pins to "in", as we want to read their state, and then we perform the actual read of the current pin state using the cat command.

Having verified that the door pin correctly reads a "1" when the door is closed and a "0" when open, I set about testing the fault pin. The fault pin should read "0" when the resistor is in place and "1" when the wires have been shorted (resistor removed), so I was very pleased that testing confirmed this too.
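Reading the two pins together gives a simple state table. Here is a small sketch assuming the pin semantics just described (door on GPIO 27, fault on GPIO 22):

# Interpret the (door, fault) pin pair, assuming the semantics described above.
def interpret(door_pin: int, fault_pin: int) -> str:
    if fault_pin == 1:
        # resistor cut out of the loop: the wiring has been shorted or bypassed
        return "tamper: sensor wiring bypassed"
    return "door closed" if door_pin == 1 else "door open"

for door, fault in [(1, 0), (0, 0), (1, 1)]:
    print((door, fault), "->", interpret(door, fault))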

With the initial testing completed, I set up the following quick-and-dirty monitor to watch the pin states over a few days, with the intention of catching false triggers. To make this easier I added a timestamp to the loop, so the time is written to the log file along with each pin number and, of course, the pin state.

#!/bin/bash

echo "27" > /sys/class/gpio/export
echo "in" > /sys/class/gpio/gpio27/direction

echo "22" > /sys/class/gpio/export
echo "in" > /sys/class/gpio/gpio22/direction

while true; do
    date;
    echo -n "pin 27: ";
    cat /sys/class/gpio/gpio27/value;
    echo -n "pin 22: ";
    cat /sys/class/gpio/gpio22/value;
    sleep 1;    # throttle logging to once a second so the log stays manageable
done > ./logfile.txt
I saved the script as ./gpio-check, then made it executable and started it in the background:

# make the script executable
$ chmod +x ./gpio-check

# and run it in the background
$ sudo nohup ./gpio-check &

After a few days I logged back into the Pi to check the times of the triggered pins. First, I stopped the script with

$ sudo kill $(pidof sudo)

Then, using grep, I filtered the results down to a single pin number at a time. Adding "-C1" to the grep commands below let me view the line above and the line below each matching pin number, so I could see the date/time that the pin changed as well as the state of both pins. Using the following few lines, I checked that the fault pin never false-triggered and that the door never opened when everyone was in bed asleep.

# fault pin should always show "0"; we search for "1" to make sure it never happened
$ grep -C1 "pin 22: 1" logfile.txt

#sensor pin should show "1" when the door is closed and "0" when open, 
# so we search for "0" as the door should never open when the house is asleep
$ grep -C1 "pin 27: 0" logfile.txt

With the logs clean of phantom triggers, I copied the doorbell script from my previous blog post, editing the URL and pin number so that it would trigger when the door was opened.

After changing the script to look like this:

#!/usr/bin/python3

import time

import requests
import RPi.GPIO as GPIO

input_pin = 27
url = 'http://example.com/api/frontdoor'

GPIO.setmode(GPIO.BCM)
GPIO.setup(input_pin, GPIO.IN, pull_up_down = GPIO.PUD_DOWN)

def button_press(channel):
    # read the pin that fired the event and notify the API when it goes high
    state = GPIO.input(channel)
    if state == 1:
        req = requests.post(url, data = "on", headers = {"Content-type": "text/plain", "Accept-Encoding": ""})
    return

# fire the callback on both rising and falling edges
GPIO.add_event_detect(input_pin, GPIO.BOTH, callback = button_press, bouncetime = 1)

try:
    while True:
        time.sleep(1)   # idle; the event callback does the real work
finally:
    GPIO.cleanup()

I saved it as ./frontdoor.py, made it executable and copied it to the /usr/local/bin/ folder

$ chmod +x ./frontdoor.py
$ sudo cp ./frontdoor.py /usr/local/bin/

And finally I created a new service file called “frontdoor.service” like this

[Unit]
Description=frontdoor service
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/frontdoor.py
Restart=always

[Install]
WantedBy=multi-user.target

then moved the file to the correct location and started the service like so

$ sudo mv ./frontdoor.service /etc/systemd/system/
$ sudo systemctl enable --now frontdoor.service

If you have been following along, you will notice that I have also removed the sleep delay from the doorbell script; this was to reduce the delay between the event happening in the real world and the trigger firing in the Pi's world.

After a few weeks of testing I finally decided that the pin checks still weren't quick enough for me, and I also wasn't happy with having to create a separate script and service every time I wanted to use another pin, as it would make maintenance awkward and scaling horrible.

And this will bring me nicely onto the topic of my next post where I go in search of something faster. As always until next time, happy tinkering!


The Story of Yet another Asian ICO Scam

November 14, 2019 in Biohacking

The Story of Bitsdaq

In December 2018 a new Chinese exchange popped out of nowhere; they called it Bitsdaq. To put it short, Bitsdaq cloned Bittrex's existing code to make use of its infrastructure. Bittrex is one of the biggest custodians for Bitcoin in the world, holding over 1 billion USD worth of Bitcoin.

As such, Bittrex has introduced some of the best security protections of any exchange. It was a smart move for Bitsdaq to partner up and clone the tech, as it reduced their work effort to basically nothing.

Soon after, Bitsdaq started to report impressive user growth. Within just 4 months they reported user login and KYC numbers that could compete with those of Binance. Quickly, people started to talk about this upcoming Chinese exchange and a potential IEO. Fast forward to March 2019: IEOs were popping up left and right, and the recent success of the Binance Launchpad got everybody excited. It was to be expected that pre-sale pools for a new upcoming exchange would soon pop up as well.

“Bitsdaq will do an IEO on their own Exchange, that has millions of users!” is what was pushed out on all the underground discord channels.

We did not give in to the hype and simply ignored it, as it seemed to be yet another Asian scam. However, the hype kept growing right up to their IEO, and so did the user base. It became quite interesting, so we started to do due diligence. Real due diligence.

Bitsdaq’s traffic and user-base figures are nothing more than a contrived product of shockingly over-valued reward campaigns.
Bitsdaq has given away millions of dollars to drive mere account-creation, not legitimate user base growth. These expenses have come in the form of BQQQ token distributions, thereby displaying the exchange’s lack of regard for its own utility token’s true value.

The heavy issuance of reward tokens will inevitably dilute the circulating supply without adding any real value to the exchange or its real user base.

In order to deceive investors and partners into believing the exchange has incredible demand, Bitsdaq has launched campaigns that pay incredible sums in the form of airdrops or candy campaigns. Such campaigns incentivize people to sign-up and pass KYC in order to claim their rewards.

For an exchange that supposedly has 2 million accounts Bitsdaq’s volume is incredibly low; this evidences the lack of value of Bitsdaq’s purchased user accounts.

Airdrops used to be a popular way to attract attention in the cryptocurrency space; nowadays, however, a generic airdrop mostly draws interest from airdrop hunters. Indeed, Bitsdaq's key source of popularity has been a mass influx of airdrop hunters hoping to cash in on the BQQQ rewards.

In fact, the top Google result for Bitsdaq is its airdrop, and two of the three recommended videos are about BQQQ give-aways. YouTube is also flooded with videos covering Bitsdaq's airdrop and candy campaign.

Much of the exposure Bitsdaq is receiving is due to the rewards it offers airdrop hunters, and even much of the content being created about the exchange primarily targets them. Such exposure adds little value and only provides vanity figures to appeal to investors.

While website traffic and fake sign-ups can be bought on various black-hat platforms, verified accounts are difficult to acquire in large numbers, as it is hard to source enough KYC documents. Thus, Bitsdaq has flooded airdrop channels with the promise of free BQQQ tokens. While such a tactic draws in traffic, the traffic is largely low-quality, and the accounts fail to materialize into genuine trade volume. This is evidenced by Bitsdaq's largely dry order book.

While exchanges do offer rewards to new users, there’s typically a requirement of minimum trade volume or deposits that must be fulfilled. Rewards on the sheer basis of sign-ups do not screen for users who participate for the sole purpose of free tokens. This is evidenced by Bitsdaq.com’s steep traffic increase and decline.

The fraction of traffic that has remained, compared to the peak, is due to the exchange's offer of additional tokens to users who log in every day.

Bitsdaq's campaigns muddy more than just the platform's verified-user and traffic figures. The platform's candy campaign rewards users for daily log-ins; the reward is earned merely by logging in. Such a campaign could be justified if users had to log in and trade, bringing liquidity to the order book. As this is not the case, rewards for merely logging in achieve nothing but an inflated, artificial user-retention rate.

Moreover, such a tactic also muddies external analysis. It implies high pageviews per session and a low bounce rate, but the true cause of these positive figures is simply that airdrop hunters log in to grow their rewards and then log straight back out. That involves just enough pageviews to feign positive user-activity statistics. The heavy spreads on trading pairs, however, reveal the reality of Bitsdaq's illiquid order books and its lack of user retention and engagement.

Given that the general cryptocurrency market perceived Bitsdaq’s paid-for (fake) metrics as an accurate representation of the exchange’s popularity, Bitsdaq has now also launched an airdrop campaign for the newly launched application.

The maximum achievable airdrop for downloading and positively reviewing the exchange’s application is approximately $7 in tokens. While purchasing fake verified exchange accounts can be quite difficult without running a mass rewards campaign to airdrop hunters, application reviews are easy to buy for a dollar or less.

Given that Bitsdaq's application-launch airdrop again rewards no real exchange activity, there is no difference between the review received for a $7 payment in tokens and the review that could have been bought for less than $1 in fiat.

Fake reviews could have been acquired at a much lower cost than $7 worth of tokens each; Bitsdaq's decision to pay far more in tokens shows the true value the exchange associates with them. The exact amounts Bitsdaq is giving away in tokens are unclear, but it can be assumed that most, if not all, sign-ups happened because of the airdrop.

Assuming that 80% of all sign-ups are due to the airdrop, then at $7 each that is on the order of 7 × 2,500,000 = 17.5 million USD, plus another 12 million USD for continued login rewards on the Bitsdaq website.

More than 30 million USD worth of BQQQ tokens has been promised to airdrop hunters: tokens that either will never be paid out to them or will completely destroy the token's value for investors.

The approximate USD value of the BQQQ tokens promised to bounty hunters and airdrop participants could be anywhere in between.

Looking at all the facts, it should be clear that Bitsdaq created false demand for its own IEO by promising millions to bounty hunters in order to inflate the numbers. Those inflated numbers were then used to present BQQQ to influencers and poolers to collect money in the private sale. Using the funds received in the private sale, Bitsdaq then paid a lot of "independent influencers" to review the exchange and create real demand for the IEO.

As airdrop users will only receive their tokens much later, there is a good chance that the fake demand Bitsdaq created by lying to everybody might actually be enough to pump the token price initially after the IEO. Pools and private-sale investors believed there was a real user base that would buy their bags after the IEO, a user base that never existed.

They might actually end up with enough real users to sustain the token price and generate a well-working ecosystem, in a genius yet very immoral marketing stunt. Due to the low hard cap of BQQQ's initial exchange offering, the token may appreciate in value at the time of listing. However, the valueless reward tokens that have been given out will eventually dilute the circulating supply. So maybe everything will still go well for BQQQ investors and Bitsdaq.

The worst aspect of Bitsdaq’s rewards campaign is not that they led to a dilution of supply with little added value to the exchange, but that they displayed an inaccurate picture of the exchange’s usage and popularity to the cryptocurrency community.

While a vast portion of the audience has taken the façade of Bitsdaq's metrics as accurate, the exchange's illiquid order book is sufficient evidence to erode the claim of two million users. Bitsdaq has two million sign-ups, but the number of actual users is excruciatingly lower.

The only thing Bitsdaq’s misleading marketing campaign has achieved is the clear display of the exchange’s lack of regard and value for its native token.

FUN FACT (This part was added 13/11/2019)

At DAO Maker we help new projects. One project that was planning to do an IEO on Bitsdaq reached out to us and asked for our opinion on the exchange. When we presented them with the Bitsdaq story, the aggressive marketing and the fake numbers, the project decided to cancel its Bitsdaq IEO.



The Banks Have Gone Mad and the System is Broken

November 14, 2019 in Biohacking

The rise of peer-to-peer crowd-lending, tokenization and the slow death of banks

There is clearly a combination of opportunistic reasons, greed, quest for power and cronyism behind why the bullshit economics of the trickle-down narrative (despite its historical failure and well-documented criticism) was sold to the masses by liberal lobbies and special interest groups, in bed with the politicians, over the last decades. Today this causes not only major economic and financial imbalances, asset-price distortions and a dramatic increase in income inequality not seen since the roaring '20s, but also, more dangerously, a social breakdown which Dalio describes as, literally, a slide towards a social/civil "war-like environment". Examples of civil unrest around the world are plentiful, from the French yellow-vests to the Chileans and the Lebanese, just to name the latest. The recent bank runs in Lebanon are a déjà vu of the EU banking crises in Iceland, Greece and Cyprus in the last decade.

But plenty has already been written on that topic. I wish rather to focus on how, from the bottom up (and not vice versa), the market economy finds new solutions and creates new opportunities for investors to finance the production of goods and services, without the economic aberration of negative interest rates and outside the legacy banking system.

The time value of money and the rise of p2p lending

One of the fundaments of a capitalistic market is that capital must be rewarded. There must be an incentive to save and to invest your savings in the economy. This is achieved through market-based interest rates paid to those who lend their money. The weirdness of zero/negative rates (which is, by the way, not the result of market dynamics but rather the arbitrary and fraudulent manipulation of central banks) has torn apart the basic principle that "money has time value" (i.e. one unit of money today is worth more than one unit of money tomorrow).

But what is likely missed is that, while such financial follies continue unabated, the real economy continues to work with the usual dynamics of a healthy capitalistic system, where positive interest rates are determined by market participants in order to lend money to productive businesses.

So far, entrepreneurs without a good enough credit standing, start-ups, SMEs and people not wealthy enough to post collateral have had a very hard time financing their businesses or consumption through the banks, even though the financial system has been awash with the liquidity injected by central bankers globally.

So, when banks stop doing what they are supposed to do (lend money) and people are not rewarded with adequate interest rates for keeping their savings with the banks, they take their money out and start lending it to other people and businesses which cannot otherwise be financed by the legacy banking system. Call this alternative non-bank lending, peer-to-peer lending or crowdlending, whatever pleases you.

Nowadays fintech is enabling the growth of a whole new breed of peer to peer lending platforms where lenders and businesses meet and determine market based interest rates to reward capital deployment.

Some figures and data will give you a much better idea of what is happening unnoticed by many.

A US$ trillion market

Now let's look at what the peer-to-peer lending sector is doing instead.

According to Statista.com it was worth US$9 billion at its very inception in 2014, was already worth US$64 billion the next year, and is projected to grow into a US$1 trillion industry by 2025. Now, that's not peanuts. This 2019 PwC research estimates the current worth of the US peer-to-peer lending industry at CHF 38.7 billion, and the Chinese market at almost ten times that, with CHF 345 billion.

The crowdlending market dwarfs the equity crowdfunding and its growth will very likely exceed the forecasted US$ 1 trillion mark.

Source: PWC.ch

Explosive growth

The following chart is worth more than words. Mintos is a Latvian fintech company launched in 2015 to become a "global marketplace for peer to peer loans". Through their platform, small investors can lend money to almost any sector of the real economy. You name it: agriculture, forestry, car loans, consumer loans, invoice financing, real-estate mortgage loans, etc.

Source: Mintos.com

According to their data, small investors had lent almost €1 billion through the platform up to August 2018. Then, in the following 15 months, thanks to the new wave of zero/negative interest rates, investments more than tripled to €3.8 billion. And Mintos is not the only one: the likes of Crowdestor, Ablrate or Investly, just to name a few, each have their own market focus, and all of them claim double- or triple-digit growth.

More importantly, looking at the published loan performance, I was surprised to see that the disclosed default risk was on average quite low, between 0.5% and 1.3% of the transacted volume, while the loan performance was quite high, with an average interest return for lenders of approx. 11.9%**.

Source: Mintos.com
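As a back-of-the-envelope check on those figures, here is a tiny Python sketch netting the quoted average return against the disclosed default range. It assumes, simplistically, that a default loses the full principal of the affected loan and that losses average out across a well-diversified portfolio:

# Rough net-return check using the figures quoted above (simplistic assumptions).
gross_rate = 0.119                    # average interest return quoted for lenders
for default_rate in (0.005, 0.013):   # disclosed default range: 0.5% to 1.3%
    net = (1 + gross_rate) * (1 - default_rate) - 1
    print(f"default {default_rate:.1%}: net return ~ {net:.1%}")

Even on these rough assumptions the net return stays above 10%, which goes some way to explaining the sector's explosive growth; it is also why the disclaimer at the end of this post advises spreading your investment across as many loans as possible.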

This is the people's response to the bankers' fraudulent manipulation of interest rates, just like bitcoin is the response to the continued debasement of fiat currencies.

While the banks are slowly, but inexorably, being deprived of their traditional social role as lenders, people are taking control of their savings back into their own hands. And before we know it, those platforms will accept cryptocurrencies, as is already happening. They will operate through channels parallel to the legacy banking system, reducing transaction costs and transfer times by using stablecoins and cryptocurrencies instead of fiat. Revolut, a UK-based fintech bank, is now opening crypto accounts next to traditional fiat accounts and also accepts crypto funds.

The Tokenization of p2p Loan Portfolios

The next step will be to tokenize entire loan portfolios, so that a global, frictionless secondary market for such loans can be born.

The tokenization of such loan portfolios will bring a number of important advantages:

(i) the programmability of the loan terms within the token via smart contracts;

(ii) the semi-automatic and instantaneous settlement and execution of the loan terms, such as interest payments and principal reimbursement, as well as the deduction of platform fees (some functions cannot be fully automated, since the borrower must always initiate payments; see the sketch after this list);

(iii) the frictionless transfer of the token and the rights to the underlying loan in the secondary markets;

(iv) the possibility to trade the token in several secondary markets, provided the token standard is compatible with such markets. This will also enhance liquidity, which is currently limited to the bids and offers on a single platform, totally lacking interoperability. It also solves the current problem that, if the platform becomes insolvent, the whole secondary market (which depends directly on the platform) comes to a halt. Even if, legally speaking, the lender's rights to the underlying loan are not affected by an insolvency of the platform, without the platform performing its key intermediary role there will inevitably be disruptions for the lenders;

(v) the self-custody of the private keys to the tokens in your personal wallet and therefore the reduction of the counterparty risk which will be then limited to the intrinsic risk of default of the borrower. Currently some platforms expressly claim that client accounts are kept segregated and that in case of insolvency of the platform such funds cannot be apprehended by the receivership;

(vi) a reduction of the paperwork currently needed and a reduction of the intermediary functions performed by the platform. Though some key functions, which cannot be automated, will still be performed by the platform such as:

– repossess collateral assets or enforce loan guarantees (personal or banking) or mortgages

– chase the borrower for late payments/default

– initiate debt recovery procedures in case of default.
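To make advantage (ii) concrete, here is a plain-Python sketch of the kind of repayment schedule a tokenized loan could encode. The class and field names are purely illustrative assumptions, not any real token standard or smart-contract language:

from dataclasses import dataclass

@dataclass
class LoanToken:
    """Toy model of the repayment terms a loan token could encode."""
    principal: float      # amount lent
    annual_rate: float    # agreed interest rate
    months: int           # loan term in months
    platform_fee: float   # fraction of each interest payment kept by the platform

    def monthly_interest(self) -> float:
        return self.principal * self.annual_rate / 12

    def schedule(self):
        """Yield (month, lender_payment): interest net of fees,
        plus the principal returned with the final payment."""
        for month in range(1, self.months + 1):
            payment = self.monthly_interest() * (1 - self.platform_fee)
            if month == self.months:
                payment += self.principal
            yield month, round(payment, 2)

loan = LoanToken(principal=1_000, annual_rate=0.119, months=12, platform_fee=0.01)
for month, payment in loan.schedule():
    print(month, payment)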

Conclusions

Since nobody, as far as I am aware, is tokenizing peer-to-peer loan portfolios yet, I asked Giuseppe Morlino, founder of the tokenization start-up Stonize, why that is: “The finance sector has lagged behind in terms of digitization and this is probably due to the high level of centralization that has characterised it, at least so far. The reason why tokenization is a game changer in finance is because it democratizes access to liquidity and investment opportunities. In other words, the true value of decentralization in finance is democratization: DeFi (Decentralised Finance) implies DeFi (Democratised Finance). And peer to peer lending is just one of the best possible use cases. We are carefully looking at that sector and we are already discussing with Loan Originators to do just that. Our Stonize T3 Protocol (Trusted Token Transfer) provides a seamless compliance solution enforcing the rules governing the digital security on issuance and secondary trades and it is agnostic. Currently, it is compatible with the Ethereum, Stellar and Algorand ecosystems and it is ready to leverage more advanced decentralised permissionless blockchains, as they will emerge. This is our approach to contribute to the rise of DeFi, the Democratised Finance in the future and peer to peer lending in particular.”

*********************************************************

** Read my disclaimers below carefully. Do your own due diligence before investing your money in any of the above-mentioned platforms. Importantly, should you decide to lend your money, spread your investment across the highest possible number of loans to reduce the adverse impact of any possible default. Read the terms and conditions of the loans carefully, and consider the counterparty risks, default risks and any other risks presented by the proponent of the investment.

#blockchain #bianconiandrea #crypto #thinkblocktank #bitcoin #fintech #peertopeerlending #crowdfunding #crowdlending

***************************************************************

Legal Disclaimer:
The website and the information contained herein are for general guidance only and do not constitute legal advice. As such, they should not be used as a substitute for consultation with lawyers on specific issues. All information in this paper is provided "as is", with no guarantee of completeness, accuracy or timeliness, and without warranty of any kind, express or implied.

Investment Disclaimer:
The website and the information contained herein are not intended to be a source of advice or credit analysis with respect to the material presented, and the information and/or documents contained in this website do not constitute investment advice.


Why Crypto Predictions Are Irrational

November 14, 2019 in Biohacking

Crypto price predictions are the most beaten-to-death topic in the blockchain space, and for a simple reason: anyone can make a crypto prediction, because these are usually backed by nothing, and wild claims will get you clicks. As a result, everyone and their grandma has an opinion about the future price of Bitcoin, forgetting that Bitcoin is mostly sentiment-driven, and accurate prediction evades even crypto fund managers and mathematics PhDs, let alone the #cryptobros.

If I filled this article with examples of predictions gone awry, then we’d be sitting here all day. Crypto predictions can be summed up with this idiom:

“If you throw enough shit against a wall, some of it has gotta stick.”

Traditional value-capture in business is done with an intermediary. For example, a bank might hold onto your money when you’re not using it and charge interest, or a social media platform might profit from your data, or a government might profit from your taxes. In essence, you have someone in the middle who has ownership of some or all of the value, which is useful if you want to make predictions.

You can make Facebook stock price predictions based on things like Facebook’s profit and loss statements, or their quarterly returns, because these are fundamentals that reflect the success of the company. With Bitcoin, there are no fundamentals. There’s no management, P&L statements, or quarterly returns.

Your best bet, without going deep into financial modeling, is asking: is it in a bull run or not? Following the market is perhaps the most sure-fire way of predicting the short-term future. If it's going up now, it'll probably keep going up, at least for a bit, and if it's going down now, it'll probably keep going down, at least for a bit. Sure, you can get into the weeds and build models that take into account things like news and Twitter sentiment besides market price, but 99% of public predictions aren't based on such models.
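As a toy illustration of that "follow the market" heuristic, and nothing more (the prices below are made up, and this is not a trading strategy), a short-term momentum signal can be as simple as:

# Guess the next move from recent momentum -- hypothetical prices.
prices = [9100, 9230, 9180, 9400, 9550, 9500, 9720]  # made-up daily closes

def momentum_signal(prices, lookback=3):
    """Predict "up" if the average move over the last `lookback` days is positive."""
    recent = prices[-(lookback + 1):]
    avg_move = (recent[-1] - recent[0]) / lookback
    return "up" if avg_move > 0 else "down"

print(momentum_signal(prices))  # "up": it has been rising, so guess it keeps rising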

The problem is that all these crypto prophets are making predictions months or even years down the line, forgetting that, no matter what happens, crypto prices aren't based on forecastable fundamentals.


Project-as-Code: The Catalyst DevOps Needs

November 14, 2019 in Biohacking

If you are familiar with Docker, Terraform, and the CI platforms (e.g. Jenkins, CircleCI, Codeship), you already know the power of Declarative DevOps. It can make development easier by being repeatable, predictable, and fast. Supporting technologies both hide complexities and offer important reuse by supporting simple structured syntax in an easy-to-create-and-read file. Each technology has codified much of its domain, allowing developers to author and instrument with nothing more than a text editor. Try something. Don't like it? Make a quick change and try again. Need an Ubuntu image? Done. How about a MongoDB instance? You got it. Need to do something completely custom? Go for it.

Importantly, it simplifies the learning curve of using many traditional tools in the chain. Sure, it takes time to set up all the distinct and disparate pieces of DevOps, but once done, it typically hums along rather nicely and mostly uninterrupted.

If DevOps Is An Engine, Your Application Is Its Fuel 

Did you notice that many DevOps tools are directly or indirectly dependent on a Git repository? A CI/CD pipeline is rather useless until a project is flowing through it. The fullest benefits of DevOps and automation are not realized without a project, yet the industry's focus has been to automate the engine without also addressing how to better source its fuel. Imagine your driving experience if you had to source the fuel for your automobile beyond the pump at the gas station.

Without the fuel, a well-configured DevOps tool-chain has only the potential for automation. The fuel, the project, is what produces the actual automation. Until the first meaningful commit, the full potential of a tool-chain is not completely understood. Although DevOps is all about automation, its setup and fuel source are still very manual.

I define a project as a “meaningful code base” resulting in a deployable application, along with all the scaffolding required to fully make all aspects of the DevOps toolchain work. Many IDEs and CI platforms generate a simple tech stack specific “Hello World” app with stubbed out/empty files to get the ball rolling.

Forget starting with a fully functioning app, or a ready to go Dockerfile, CI YAML file, or Terraform file. These platforms simply do not know enough about a project’s requirements to produce a meaningful project.

In order for DevOps to grow in adoption, the hurdle of getting to the place of automation needs to be lowered or removed entirely. The benefits of DevOps and CI/CD are easy to understand but the ROI is still a factor for many organizations. Some adopt aspects of DevOps while attempting to integrate their current development methodologies.

The DevOps tool-chain can be a rather large one, so which ones to choose and how to make them work cohesively is part of the learning curve.  Even though vendors like GitLab offer an all-in-one DevOps platform, it too is dependent on your project.

Being able to stand up an operational DevOps toolchain quickly would benefit every project, especially those on the fence about DevOps. By automating the provisioning of best-of-breed technologies, the power of DevOps, from pipeline to orchestrated application container, is more easily realized. In this way, the engine's fuel source becomes easier to supply.

Automating the Automation With Project-as-Code

So let's consider automating the setup of DevOps itself and its fuel source. There is Container-as-Code with Docker, Orchestration-as-Code with Kubernetes, Infrastructure-as-Code with HashiCorp's Terraform, and Pipeline-as-Code with many CI platforms. So why not Project-as-Code?

If a project's DevOps requirements can be described, then those requirements can be declared and codified. Just as an operating system can be declared in a Dockerfile, a project declaration system should hide the messy details of the "how" in order to create the "what". Just like other "as-code" implementations, it should be easy to use.

Simply stated, Project-as-Code would allow the following example declarative statement to be turned into a running CI pipeline with a functional application flowing through it:

“I have a business model I would like to apply to an Angular7 tech stack. I want to store data using MongoDB.  I would like the resulting project files to be committed to my GitHub repo, the source files to be built and tested using CircleCI, and the resulting application pushed as a Docker image to my container repository then finally deployed to a designated Kubernetes cluster on GCP.”

The above statement could read declaratively:

project:
    techstack:
       identifier:        Angular7Stack
    model:
       identifier:        Some business model file location
    options:
        application:
            name:         MyAngular7App
            description:  An Angular7 app using MongoDB
            version:      0.0.1
        cicd:
            platform:     circleci    
        git:            
            repository:   demoRepo
            tag:          latest
            host:         github.com
        docker:
            orgName:      myOrg
            repo:         demoDockerRepo
            tag:          v0.0.1
        terraform:
            provider:     google
        kubernetes:
            host:         https://xxx.xxx.xxx.xxx
            region:       us-central1-a
            hostTarget:   google
        artifact-repo:
            type:         jFrog
            repoUrl:      <ipaddress>:8081/repo/npm-public
        mongodb: 
            serverAddress: localhost:27017
            databaseName:  angularDemoDB

This is just as straightforward as we have come to expect from Declarative DevOps. Now feed these declarations through a system that can generate and commit the code and required config files, and instantiate the DevOps pipeline to build/test/containerize/deploy the resulting app.

Project-as-Code is a single YAML file fed into a "system" that turns project declarations into business-contextual, tech-stack-specific source code along with CI, container, and orchestration config files. That's a mouthful, but worth considering.
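As a rough sketch of what such a system could look like (the generator functions below are hypothetical stand-ins, not an existing tool, and the file name is assumed), the declaration could be parsed with PyYAML and dispatched to per-technology generators:

# Hypothetical Project-as-Code driver: parse the declaration, dispatch generators.
import yaml  # PyYAML

def generate_ci_config(options):
    # stub: a real generator would emit e.g. a .circleci/config.yml
    print("CI platform:", options["cicd"]["platform"])

def generate_dockerfile(options):
    # stub: a real generator would emit a Dockerfile
    print("Docker repo:", options["docker"]["repo"])

def generate_terraform(options):
    # stub: a real generator would emit *.tf files
    print("Cloud provider:", options["terraform"]["provider"])

with open("project.yml") as f:           # the declaration file shown above
    declaration = yaml.safe_load(f)

options = declaration["project"]["options"]
for generator in (generate_ci_config, generate_dockerfile, generate_terraform):
    generator(options)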

By infusing the business context, there is no more "Hello World". Instead, teams start with a deployable application with (likely) tens to hundreds of thousands of lines of scaffolding that form the foundation of a typical project.

What Does It All Mean?

The upfront work required to create a single declarative YAML file would pay massive dividends in saving weeks to months of time to the first meaningful project code commit. Every project has certain things that are more important to work on than others.

We should continue to seek to automate the mundane and predictable to focus more on the critical dev. If so, imagine how much more productive and innovative you, your team, your company, and the industry as a whole would be.


Why Data Anomalies are More Important Than You Think

November 14, 2019 in Biohacking

It is easy to be annoyed by strange anomalies when they are sighted within otherwise clean (or perhaps not-quite-so-clean) datasets. This annoyance is immediately followed by eagerness to filter them out and move on. Even though having clean, well-curated datasets is an important step in the process of creating robust models, one should resist the urge to purge all anomalies immediately — in doing so, there is a real risk of throwing away valuable insights that could lead to significant improvements in your models, products, or even business processes.

So what exactly do I mean by “data anomalies”? There is no single definition for what constitutes an anomaly, as it depends both on the nature of the data and one’s understanding of the processes generating that data (i.e., anomaly is in the eye of the beholder). They are essentially patterns that deviate significantly from the expected behaviour, leading one to believe that there’s either (1) an error somewhere or (2) a new, unknown cause for the observed deviation. Either possibility should give one pause before hitting the delete button and moving on. If it’s an error, is it random inconsequential noise or a systematic issue somewhere in the process? Could the underlying reason be causing other, less visible issues in the data? If it’s not an error but a new phenomenon, what are its implications? Does it herald a new trend in the market which the business would otherwise miss out? If some of these questions could apply to your data, then anomalies may actually be valuable and deserve to be examined with due care.

At Vortexa we obtain vessel and cargo data from multiple sources in order to generate the most complete view into waterborne oil flows around the world. As in other industries, data quality can vary considerably across different sources, and thus to avoid the infamous GIGO (garbage in, garbage out) we have set up a process to clean and curate each training dataset used by our Machine Learning models. In this post, I describe some lessons we’ve learned as we’ve grappled with some anomalies in our datasets.

Detecting anomalies

Anomalies can be detected using model-free or model-based approaches. Model-free methods rely on a distance metric to identify samples that are “far away” in some sense from other observations within a dataset. Some examples of model-free methods are clustering, nearest-neighbour, and information-theoretic approaches. These methods do not assume a particular structure or distribution in the data, other than the existence of groups of points that are relatively close to one another (clusters) and points that do not seem to belong to any cluster (anomalies). In contrast, model-based methods are based on a set of assumptions about the process generating the data. I will focus on model-based anomaly detection for the remainder of this post.

Let’s start by looking at a classic textbook example of a model-based anomaly detector. In this example, our observations are univariate real numbers which we represent as variable x. If we assume that x is generated as independent random samples from a normal distribution with mean μ and standard deviation σ (i.e. x ∼ N(μ, σ)), then we can define as anomalous all observations that are more than 3 standard deviations away from the mean (i.e., |x-μ| > 3σ). Then, if our assumption is correct, the probability of observing an anomaly by chance is less than 0.3%. If the number of anomalies turns out to be significantly larger than this, we can be certain that they are generated by a different kind of process than represented by our model and need further investigation.
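Here is that textbook detector on synthetic data (the distribution parameters and injected anomalies are made up for illustration):

# The 3-sigma rule from the paragraph above, applied to synthetic data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=10_000)  # well-behaved samples
x[:5] = [25, -3, 30, 28, -5]                      # inject a few obvious anomalies

mu, sigma = x.mean(), x.std()
anomalies = x[np.abs(x - mu) > 3 * sigma]
print(f"{len(anomalies)} anomalies flagged out of {len(x)} points")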

Machine Learning methods can be used to build efficient anomaly detectors. Assuming that one starts with a curated, anomaly-free training dataset D comprised of data points (xᵢ, yᵢ), where xᵢ are feature vectors and yᵢ are class labels, supervised learning methods such as logistic regression, Bayesian networks, and neural networks (among many others) can be used to estimate P(y|x), the conditional probability distribution of class labels given a set of features. This estimated distribution will reflect the patterns in D as well as the underlying assumptions of the chosen supervised learning algorithm. The model can then be used to detect potential anomalies among new, unseen data points (xᵢ’, yᵢ’) by checking for samples that contain an unlikely class label. In other words, for a given probability threshold τ, anomalies are defined as data points whose observed class label has probability below the threshold: P(y=yᵢ’ | xᵢ’) < τ.
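A minimal sketch of this kind of detector using scikit-learn, with made-up data and an arbitrary threshold τ:

# Flag records whose observed label looks improbable under the trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # clean, consistent labels

model = LogisticRegression().fit(X_train, y_train)

tau = 0.05
X_new = np.array([[2.0, 2.0], [-2.0, -2.0]])
y_new = np.array([0, 1])  # both labels contradict the learned pattern

# Probability the model assigns to each point's observed label:
p_observed = model.predict_proba(X_new)[np.arange(len(y_new)), y_new]
print(p_observed < tau)  # True where the record is flagged as a potential anomaly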
Anomaly detection is an old problem in statistics and a multitude of algorithms have been created over the years to address it, some of which are more appropriate in specific domains than others. The advantage of the model-based approach proposed above is that it can be readily applied if one has already built a classification model from a curated dataset. If however you do not have an anomaly-free training dataset or your data does not contain categorical output labels, then you may try modifying the approach above (e.g., by using a density estimation method) or using a model-free approach.

Diagnosing anomalies

Diagnosing the underlying issue(s) causing the anomalies is the most valuable step in the clean-up process, but also the hardest. In some cases, it may require deep expertise in the industry or process generating the data as well as a solid understanding of statistics and the assumptions inherent in your model. If you used the model-based approach proposed in the previous section, then all we know is that the detected anomalies deviate from the patterns in the training dataset as captured by the supervised learning method. We now need to understand what may be the cause for this deviation — there are several possibilities:

1. Expected noise in the data-generating process. This is the simplest explanation, and if it is the only reason for the anomalies, then the number of anomalies detected can be estimated theoretically (as in the “classic textbook example” above);

2. Unexpected noise or error in the data-generating process. This may be the case if the number of anomalies is larger than expected. Data processing errors sometimes go undetected, so it is always advisable to inspect the raw data together with the final processed records. Trivial issues in the data can often be identified visually, so eyeballing the anomalous records is usually a good first step.

3. A previously observed feature pattern x’ with a new class label y’. If the feature pattern x’ in the anomalous record has several similar instances x in the training dataset, but these crucially have a different class label y ≠ y’ attached to them, then this direct contradiction needs to be resolved by a domain expert. If this anomalous record is deemed to be an error, then it needs to be filtered out or corrected. If, however, the anomaly is found to be accurate, then it signifies a shift in the observed patterns (e.g. due to changing market dynamics). The model would have to be retrained with these new data points and additional context so that it can detect the new patterns and adjust its predictions.

4. A new feature pattern x’ not previously observed in the training dataset. When trained correctly, supervised learning models should generalise to unseen patterns. Even if a specific set of features had no equivalent in the model’s training dataset, learning algorithms can extrapolate from the existing patterns in the training dataset and predict the distribution of class labels P(y|x’) for the unseen pattern. If the predicted probability for the class label y’ was low (which caused the record to be flagged as anomalous), then there are two possibilities: (a) the model is correct and the data point x’, y’ is indeed an anomaly — in which case we again need to determine whether it’s a data error or a legitimate shift in pattern (see point 3 above); or (b) the model is wrong and the data point is not an anomaly. When models fail to generalise to unseen patterns, it could be for a number of reasons: insufficient support in the training data, poorly tuned hyperparameters, insufficient set of features, or wrong underlying assumptions (e.g. linearity, independence of factors). A large number of incorrect anomaly predictions may be an indication that the model needs to be revised.

Conclusion

Detecting and diagnosing data anomalies can be challenging, especially as the amount and complexity of data continue to increase seemingly without bounds. A mix of Data Science and industry expertise may be needed to resolve the most complicated cases, when it is not clear whether the model prediction is incorrect, or whether the anomaly reflects a new, real-world phenomenon. Despite its challenges, your organisation could reap enormous benefits by setting up a process to review potential data anomalies periodically. Not only would it keep the datasets clean and models improving continuously, it could provide the business with invaluable early signals of shifts in market dynamics. When seen this way, data anomalies cease being a source of annoyance — they suddenly become a source of opportunities.


How Developers Win With Low-Code Platforms

November 14, 2019 in Biohacking

Businesses everywhere have started to see the value low-code has for their organizations, and have been adopting the technology accordingly. However, some IT experts and software developers view the term "low-code" in a negative light, seeing it as synonymous with their jobs becoming obsolete, since low-code embraces the idea that anyone can be a developer. This makes some professional developers fear that their position will not be needed to the same extent as the low-code trend continues to catch on, or that much of the coding they enjoy will be taken away and replaced with a mostly visual, simplified interface. This notion can make developers feel like their role might be less relevant within a company using a low-code platform, or that the coding they spent years learning to write will vanish from the development process.

This couldn't be further from the truth. Low-code should not feel like a threat to developers; it should feel like a relief. This article looks at why low-code is good news for IT departments, and how it can empower developers to be more productive than ever before.

Low-code should not feel like a threat to developers. It should feel like a relief.

IT departments are often overwhelmed. The requirements for software development within companies increase year by year as businesses digitally transform and rely more on technology to enhance their processes. This means more work for IT experts, who already spend up to 86% of their time just maintaining the existing tech within a business: updates, security fixes, patching, etc. This leaves little room for the creation of new software innovations at the pace the company desperately needs in order to stay competitive, automate its many processes, and digitally transform.

Not only are IT experts short on time to meet the requirements for new software solutions, they might not have the time to make the solution "pretty" enough either. Though they are the go-to experts for everything related to software creation, developers spend the majority of their effort ensuring that the solution is error-free. Important aspects of a software solution from the user's point of view can go unaddressed when time is so limited, such as ensuring the app is simple to use and intuitive.

Potentially useful features remain unidentified due to gaps in communication and the need for speedy development.

For traditional software development, this means that in addition to the extra time needed to code, test, and re-test, more time will also likely be spent tackling change requests for the solution's usability and UX. The complicated back-and-forth of standard development procedures can be time-consuming and headache-inducing for IT experts who already have so much on their plate. Clearly, developers face several challenges, and low-code can be a valuable asset in overcoming them.

Now is a better time than ever to clear up some misconceptions about low-code technology and outline the advantages low-code platforms have for professional developers.

Firstly, low-code platforms are not exclusively for non-IT experts: professional developers save time and headaches when using a low-code platform. It is true that any user can become a citizen developer and create their own solutions with little to no coding skill, but low-code platforms are also helpful to IT departments who need to create intuitive software fast enough to keep up with pressing demands. Low-code simplifies the aforementioned process of coding line by line, testing, patching, etc., and ensures that IT professionals can keep up with the quickening pace of start-to-finish timelines for creating new solutions. With low-code platforms, the limited capacity left over after maintenance is more than enough to develop new apps.

Now we can address another misconception about low-code: that it removes the need to code altogether. As the name states, some coding is involved with low-code platforms, just not as much as in standard software development. Creating simple solutions on a low-code platform does not require extensive coding, and the platform automatically generates and compiles the code needed for many processes during development. But just because the process is simplified does not mean that the development process and the solutions created cannot be complex and involve hand-written code. Larger and more complicated requirements on low-code platforms still, without a doubt, need the expertise (and coding) of professionals. Low-code platforms let you go beyond their drag-and-drop design abilities, putting your coding knowledge to work in the process. Additionally, low-code platforms are best implemented when companies appoint a professional developer to learn the full capabilities of the system – a position that makes IT experts even more valuable within their company than they already are.
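
To make that “there’s still code in low-code” point concrete, here is a minimal sketch (in Python) of the kind of escape hatch such platforms typically expose. Everything in it is hypothetical – the decorator, event name, and record shape are illustrative stand-ins, not any specific vendor’s SDK.

    # Hypothetical low-code escape hatch: the platform generates the CRUD
    # screens and data model, while a developer attaches custom logic to
    # lifecycle events. None of these names come from a real vendor SDK.

    HANDLERS: dict = {}

    def on_event(event_name):
        """Stand-in for a platform decorator that registers custom handlers."""
        def register(handler):
            HANDLERS.setdefault(event_name, []).append(handler)
            return handler
        return register

    @on_event("order.before_save")
    def apply_volume_discount(record: dict) -> dict:
        # A business rule too specific for drag-and-drop blocks: hand-written.
        if record.get("quantity", 0) >= 100:
            record["discount"] = 0.15
        return record

    def save(record: dict) -> dict:
        # The platform would invoke registered handlers itself; simulated here.
        for handler in HANDLERS.get("order.before_save", []):
            record = handler(record)
        return record

    print(save({"quantity": 120}))  # {'quantity': 120, 'discount': 0.15}

The generated screens stay drag-and-drop; the one rule that actually needs a developer stays code. That is the division of labor the paragraph above describes.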

Taking all of this into account, low-code platforms can be one of the most useful tools IT experts have for keeping up with the increasing demands on their departments. Low-code technology is here to stay, and it is not seeking to make enemies of those who love to code. Though these platforms can empower non-coders with app development capabilities, they also empower IT departments to create more user-friendly, powerful, and complex software solutions in a fraction of the time traditional software development takes.
About the author: Katherine Kostereva is CEO and managing partner of Creatio (formerly bpm’online), www.creatio.com, a leading low-code, process automation and CRM company, focused on accelerating marketing, sales, service, and operations for midsize and large enterprises. Katherine Kostereva has bootstrapped Creatio and has grown it to a global software company with offices around the world, a team of 600 engaged professionals and thousands of customers worldwide.

5 True Intercom Alternatives in 2020

November 14, 2019 in Biohacking

Are you a Customer Success superhero? Perhaps a sales or marketing executive at an e-commerce business or a SaaS founder?

Then you’ve probably tested, used or at least heard about Intercom. It’s an amazing customer platform that enables thousands of online businesses to communicate with and support their customers in many different ways.

But, like any other product, Intercom is not perfect for everyone. There are things that thousands of Intercom’s existing and potential users are missing or having major problems with:

  • High prices
  • Unscalable & unpredictable billing (per number of active users)
  • Not always attentive and helpful customer service
  • Unsatisfying email automation tools
  • Some missing features and long-to-fix bugs

That’s why many Intercom users have started looking for more suitable alternatives. And that is why you’ve ended up on this page, isn’t it?

Let’s talk about Intercom first!

Before we start looking into Intercom’s competitors more closely, let’s talk about what Intercom actually is. This will give us a clearer basis for judging its alternatives.

Intercom features

Intercom offers three core packages of features: Lead generation, Customer engagement, and Customer support (formerly known as Acquire, Engage, and Support). 

Each of them includes different combinations of tools tailored for the corresponding purpose. For instance, the ‘Customer support’ kit includes Business Messenger and Team Inbox combined with Help Center Articles, Answer Bot, and Customer Data.

It can be really confusing to even understand what each package, tool, or feature offers in terms of functionality, especially since they all have their super special names in Intercom. It’s like they’re deliberately trying to make it too complicated.

That’s why we decided to break them down and look at Intercom’s fundamental features:

  • Live chat (aka Business Messenger)
  • Ticketing (aka Team Inbox)
  • Knowledge base (aka Help Center Articles)
  • Email marketing (aka Outbound Messages combined with Campaigns)
  • Automated and targeted messages via chat (aka Outbound Messages combined with Campaigns).

It should be noted that these are not all of Intercom’s features; it also offers some of the best bots out there. But since they cost a fortune and are unnecessary for smaller businesses, we won’t concentrate on them much.

Intercom pricing

So, it’s obvious that Intercom offers tons of features and tools, but how much does all this beauty cost?

If you want to get the cheapest thing from Intercom, you can get their Essential Customer Support subscription for $38/month and it will include only the Intercom live chat. Yep, that’s all you can get for $38/month.

If you need something more elaborate, you will have to take a look at Intercom’s All-in-one subscription, which will include live chat, ticketing, email marketing and chat auto and manual messages. Its most basic crippled version will cost you at least $87/month.

But that is not all you need to pay.

The biggest difference between Intercom and its closest alternatives is that Intercom Messages are priced based on the number of your active contacts. By active, they mean users who have been active in the past 90 days – even if their whole activity was to leave one short message and never come back.

Most Intercom alternatives don’t do this: usually, you pay a fixed price for the number of support agents you need, and that’s it.

All in all, you can never tell how much you’ll end up paying for Intercom. One month it can be a couple of bucks; the next, you’ll pay a thousand.
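
To see why that unpredictability matters, here is a back-of-the-envelope comparison of the two billing models in Python. The base price, contact allowance, and overage rate below are assumptions made up for the arithmetic – not Intercom’s (or anyone’s) actual rate card.

    # Rough illustration of per-active-contact vs. flat per-seat billing.
    # All prices are assumptions for the sake of the arithmetic.

    def per_seat_cost(seats: int, price_per_seat: float = 29.0) -> float:
        """Flat per-agent pricing: predictable, independent of traffic."""
        return seats * price_per_seat

    def active_contact_cost(base: float, active_contacts: int,
                            included: int = 200,
                            per_extra_100: float = 10.0) -> float:
        """Per-active-contact pricing: grows with every user who wrote to
        you in the past 90 days, even if they sent a single message."""
        extra = max(0, active_contacts - included)
        return base + (extra / 100) * per_extra_100

    # A quiet month versus the month after a successful launch:
    for contacts in (150, 5_000, 50_000):
        print(f"{contacts:>6} active contacts -> "
              f"${active_contact_cost(87.0, contacts):>9,.2f}/mo vs "
              f"${per_seat_cost(3):,.2f}/mo flat for 3 seats")

The flat bill never moves; under these assumed rates, the per-active-contact bill swings from $87 to over $5,000 as traffic grows.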

A quick note:
This post was originally published on the HelpCrunch blog by Daniil Kopilevych, Marketing manager at HelpCrunch. The author goes in-depth explaining the pros and cons of HelpCrunch compared to Intercom and other alternatives listed in the article.

Intercom alternatives

After extensive research, we’ve chosen the 5 best Intercom alternatives that have a chance of replacing it for your business: HelpCrunch, Drift, Olark, Zendesk, LiveAgent.

We’ve tested every solution listed above for at least 2 weeks, which gave us a pretty good understanding of how each product might satisfy a typical Intercom user’s needs.

1. HelpCrunch – affordable all-in-one Intercom alternative

Back in 2014, while working on another project, our team was struggling to find a great all-in-one customer communication tool. We used lots of disconnected tools, then switched to Intercom, but it had several major flaws that we just couldn’t get over.

That’s why at some point we decided to build a new all-in-one solution for users just like ourselves – and that is how the story of HelpCrunch began.

HelpCrunch features

Among all other Intercom alternatives, HelpCrunch is probably the closest you can get to Intercom in terms of functionality. 

And that’s just the beginning. After you’ve installed HelpCrunch on your website, you can set up the following HelpCrunch features:

  • Live chat
  • Ticketing
  • Knowledge Base
  • Email marketing
  • Automated and targeted messages via chat

For instance, if you run a SaaS business, you can easily send targeted in-app and email messages to your users based on custom data that you can transfer directly from your product to HelpCrunch.

Also, in HelpCrunch you can automatically send email follow-ups if your chat messages remain unseen for a particular amount of time. This helps a lot in engaging your customers better and increasing retention.
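
Conceptually, that follow-up rule is just a timer check over unseen chat messages. Here is a rough sketch of the logic in Python – the field names and the two-hour threshold are assumptions for illustration, not HelpCrunch’s actual data model or API.

    # Conceptual sketch of a "resend chat message via email if unseen" rule.
    # Field names and the threshold are assumed, not HelpCrunch's real API.
    from datetime import datetime, timedelta

    UNSEEN_THRESHOLD = timedelta(hours=2)  # assumed; configurable in-product

    def follow_up_unseen_chats(messages, send_email, now=None):
        now = now or datetime.utcnow()
        for msg in messages:
            unseen = msg["seen_at"] is None
            stale = now - msg["sent_at"] > UNSEEN_THRESHOLD
            if unseen and stale and not msg["emailed"]:
                send_email(msg["user_email"], msg["body"])
                msg["emailed"] = True  # never double-send a follow-up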

Thanks to one of the richest sets of customization options, you can make the HelpCrunch chat fit your brand style as precisely as needed. From localizing the chat widget for different markets to customizing the widget size and color, button style, and wallpapers – it’s easy to make it truly yours.

HelpCrunch pricing

Just like in Intercom, you can try HelpCrunch for free for 14 days. Unlike Intercom, you don’t have to leave your credit card for this.

HelpCrunch offers 2 core pricing packages: Live chat and Live chat & Emails. The Live chat package includes the live chat functionality itself, as well as the Auto Messages and chat Ticketing features.

There are 4 regular plans you can choose from:

  • Standard Live chat – $15/mo/team member
  • Premium Live chat – from $25/mo/team member
  • Standard Live chat + Emails – $29/mo/team member
  • Premium Live chat + Emails – from $49/mo/team member

You can also chat with the HelpCrunch support team about the Enterprise subscription, which can become your custom plan for specific business needs.

HelpCrunch pricing depends on the number of agents you need. You get unlimited chats and contacts on all paid plans.

You can also install the same live chat widget on any number of domains you want. You might also have to pay for additional emails if you need more of them.

HelpCrunch rating

HelpCrunch scores an unbelievable 5 / 5 stars on Capterra, based on 136 reviews. On G2 it also has the highest rating among all the Intercom alternatives here – 4.8 / 5, based on 109 reviews.

HelpCrunch vs Intercom

HelpCrunch and Intercom are pretty similar products, especially when it comes to the Messages functionality. But some things make HelpCrunch special – like far richer live chat widget customization options and, quite obviously, really affordable pricing.

But the most striking difference between the two is that you get an unlimited number of contacts on any paid HelpCrunch plan – for $15/mo or for $500/mo, it doesn’t matter.

You also get a personal onboarding assistant once you sign up for HelpCrunch – to the point that if you’re migrating from Intercom or any other tool, they’ll migrate your data for you.

I personally tried importing contacts from Intercom to HelpCrunch and vice versa – there were absolutely no issues in either case. In HelpCrunch, you can even bulk-edit contacts, which is not possible in Intercom.

Regardless of the feature naming, both platforms allow you to:

    • Talk to your website visitors and users via live chat in real-time
    • Send proactive auto messages on your website to initiate new conversations and convert visitors into leads
    • Send targeted in-app auto messages to nurture and support your existing user base
    • Send manual email campaigns to inform your customers about important company news, product announcements or special deals.
    • Manage all contacts from a single dashboard
    • Create help centers for customers’ self-service.

But in HelpCrunch, you can do all that for a price that is 3–5 times lower.

Also, HelpCrunch provides some neat multi-channel conversations functionality that Intercom doesn’t, such as:

    • ‘Resend chat message via email if unseen’, which automatically follows up via email if a visitor/user doesn’t see your chat message.
    • The ability to quickly switch between sending a chat message and an email, for a better multi-channel messaging experience.
    • Customer’s message sneak peek, which shows you what customers are typing before they even send a message. 
    • Knowledge base SEO settings where you can specify articles’ meta titles and meta descriptions as well as target keywords for search engines.

To sum up the comparison between Intercom and HelpCrunch, both products are rather similar and offer a familiar customer communication experience. They are especially great for SaaS and E-commerce companies that want to send targeted messages to their customers based on various attributes.

Intercom is an amazing solution for large organizations with big budgets that need a robust package of customer support tools.

In contrast, HelpCrunch is a better choice if you’re looking for a more affordable yet very similar alternative to Intercom without overspending and compromising on quality.

2. Zendesk – support-oriented Intercom alternative

Zendesk is a mature customer support platform that offers a wide range of features, sometimes even more than Intercom.
Only a couple of years ago, Zendesk introduced their live chat software after acquiring Zopim live chat. But their ticketing functionality is at the heart of the whole toolset and has almost no real alternatives.

Zendesk features

Zendesk offers a wide range of features for every use case and pocket.

    • Live chat (Chat)
    • Ticketing (Support)
    • Knowledge base (Guide)
    • Cloud call center (Talk)

The live chat widget has very limited customization options and looks quite outdated, while the ticketing functionality is the essence of Zendesk and works like a charm. Zendesk ticketing has some advanced options under the hood that large teams will appreciate.

It sure is one of the most reliable tools out there. But god is it complicated. Once you sign up, Zendesk does its best to onboard you and show you around, but the tool is so complex that it’s really difficult to just dive in and understand how things work right away.

All in all, every feature we tested worked fast and smoothly. It feels like you get a very reliable piece of software that won’t let you down, even if the UX/UI is not best in class.

Zendesk pricing

Zendesk offers the longest trial period of any tool here, Intercom included: 30 days.

The pricing for the whole Suite of Zendesk features starts at $109/mo for Professional and $179/mo for the Enterprise plan. 

The cheapest solution you can get is Zendesk’s Support tool for ticketing. It can cost you anywhere from $9/agent/mo to $125/agent/mo.

You can also get free versions of almost every tool including Guide, Chat, and Talk. But note that you can only use them separately – meaning, you can’t use a combination of solutions for free.

What’s also cool about Zendesk’s pricing policy is that you can purchase their tools separately and combine them in any way you need.

The Professional plan includes live chat, ticketing, knowledge base, and call center tools, but it doesn’t include chat unbranding, customization options, roles and permissions, and other things that come only with the Enterprise plan.

Zendesk rating

On Capterra, Zendesk scores 4.5/5 stars based on 2243 reviews. 
At the same time, G2 separates Zendesk ratings by corresponding tools: Zendesk Support is rated 4.2 / 5, while Zendesk Chat scores 4.3 / 5 stars. Zendesk Talk has the lowest rating of 3.9 / 5.

Zendesk vs Intercom

You can immediately tell the difference in the positioning of the two platforms: the first thing Zendesk asks you to configure once you log in is your support mailbox, while Intercom asks you to configure your live chat widget first.

While testing the tools, I also found Zendesk to be a more customer-service oriented product as opposed to Intercom which is great for everything from marketing and sales to customer support.

You can still see that Zendesk Chat is not 100% integrated into the rest of the Zendesk toolset, which may cause minor issues and inconveniences here and there. When it comes to customization, Intercom definitely offers many more options, which also just look more modern.

Also, if you run a SaaS company and want to send in-app messages to your users, that’s not really possible with Zendesk. To add to that, Zendesk doesn’t offer email marketing functionality which could otherwise enable you to send email campaigns to your customer base.

Overall, Zendesk doesn’t feel as modern as it could on either the customer or the agent side, while Intercom manages to constantly keep the innovation bar high for the UX/UI of its product suite.

At the end of the day, I found that if I wanted to work most productively I’d need to have all 4 main Zendesk products opened in different browser tabs as there is no option of having all of them within a single dashboard. 

Intercom is definitely more suitable for fast technology companies that need an all-in-one (marketing, sales, support) solution and have a budget big enough to not mind Intercom’s high and ever-changing pricing.

3. Drift – sales-oriented Intercom alternative

Drift takes a slightly different approach to customer communication, focusing on bot conversations for lead qualification. That’s immediately obvious on a first visit and is stated right in the header of their homepage. But what does this really mean? Let’s look into Drift’s features.

Drift features

Drift divides its tools into two categories — Drift for Sales and Drift for Marketing. They also offer a separate package called Drift for Enterprise.

In terms of functionality, it basically has three core features:

    • Live chat (Chat)
    • Targeted messages (Playbooks)
    • Knowledge base (Help)

Additionally, Drift has many features that other Intercom alternatives don’t really offer – like an integrated calendar for booking meetings, sales video recording tools, automation bots, etc.

One of Drift’s most distinguishing inventions is something called ‘conversational marketing’, which basically means using chatbots to communicate with leads in real time via chat. It eliminates all the redundant lead forms and concentrates on the essentials.
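
To picture what replaces the lead form, here is a toy qualification flow of the kind such a bot might walk through. The questions, thresholds, and routing names are made up for illustration and are not Drift’s actual Playbooks format.

    # Toy lead-qualification flow of the kind a "conversational marketing"
    # bot runs instead of a static lead form. Generic logic only.

    QUESTIONS = [
        ("team_size", "How many people are on your team?"),
        ("budget", "Do you have budget allocated for this? (yes/no)"),
    ]

    def qualify(answers: dict) -> str:
        big_team = int(answers.get("team_size", 0)) >= 50
        if answers.get("budget") == "yes" and big_team:
            return "route_to_sales_rep"      # hot lead: book a meeting now
        return "send_resources_and_nurture"  # keep in the email nurture track

    print(qualify({"team_size": "120", "budget": "yes"}))  # route_to_sales_rep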

Drift pricing

You can’t really test Drift’s paid subscriptions since it doesn’t offer a standard trial of any kind. But it does offer an extensive free plan with rich functionality: live chat with 1 agent seat, 150 sales email sequences, 100 contacts, and calendar integration.

There are three paid subscription plans available at Drift:

    • Standard for $50/mo
    • Pro for $400/mo
    • Premium for $1400/mo.

All three subscriptions include only 1 seat and their prices are specified for annual subscriptions only.

There was a time when Drift’s pricing could get even more aggressive, forcing you to pay an extra $10 for every additional 1,000 contacts. But this year they removed the restrictions on the number of contacts you can have in your database. So, lucky us, I guess.

Drift rating

Drift is rated 4.5 / 5 based on 111 reviews on Capterra. On G2, it scores a similar 4.4 / 5 stars, though that rating is based on 392 reviews.

Intercom vs Drift

Both Drift and Intercom are among the best and most expensive tools in their respective categories. Drift is definitely more suitable for sales-driven organizations with bigger checks and longer lead qualification processes & sales cycles.

I guess the biggest difference between Drift and Intercom is that Drift isn’t really crafted for customer support. They don’t even have anything like ticketing in their software.

However, Drift offers a surprisingly feature-rich free plan with email functionality, which makes it a great solution for small companies that only need one seat and don’t manage many contacts.

Drift’s Free and Standard ($50) plans are pretty great deals for small non-product based businesses that also don’t require ticketing. Note that sending in-app messages and knowledge base integration are only available starting from the Standard plan.

But if you really want to enjoy all those famous sales goodies from Drift including their bots and landing pages, you should go with the Pro plan for at least $400/mo. Service agencies, as well as other B2B organizations targeting enterprise and mid-size businesses are the ones who’ll be really satisfied with Drift’s platform.

SaaS and e-commerce businesses with smaller average checks and higher priority for customer support would be better off looking at Intercom’s solution.

4. LiveAgent – the simplest Intercom alternative

LiveAgent also started out as a ticketing system, aiming to develop an all-in-one help desk software. Now it truly is extensive and rich in functionality, with the main focus on customer support rather than marketing and sales.

LiveAgent features

LiveAgent is refreshingly straightforward about its features, unlike all those ‘engage, acquire, grab, convert with landings, bots and whatnot’. It offers 4 main tools (and they’re all called exactly what they should be):

    • Ticketing
    • Live chat
    • Call Center 
    • Knowledge base

All in all, they don’t really offer anything extra or unconventional. It’s the good old ticketing system with live chat and a knowledge base.

The two most interesting things about LiveAgent are its real-time notifications when someone clicks the chat button and its real-time map of all website visitors.

For instance, when a visitor simply clicks the LiveAgent chat widget on your website (without sending a message or even typing anything), the system automatically notifies your support agents about a potential chat and connects them to a conversation with that visitor. It can also be quite irritating and time-consuming, if you ask me.
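
Under the hood, that behavior is just an event fired on click rather than on message send. A generic sketch of the idea (not LiveAgent’s actual implementation):

    # Generic event-driven sketch of "notify agents when a visitor clicks
    # the chat button" -- illustrative only.

    def on_widget_click(visitor_id: str, agents: list, notify) -> None:
        """Fires before the visitor types anything: the click is the signal."""
        for agent in agents:
            if agent["available"]:
                notify(agent["id"],
                       f"Visitor {visitor_id} opened the chat widget")
                break  # connect the first free agent to the pending chat

    on_widget_click("v-42", [{"id": "a1", "available": True}], print)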

LiveAgent pricing

LiveAgent’s pricing is very easy to grasp. There are 3 plans to choose from:

    • Ticket plan for $15/agent/mo
    • Ticket + Chat plan for $29/agent/mo
    • All-Inclusive plan for $39/agent/mo

There is no free version, but you can start a 14-day trial on any of the plans. Super clear and self-explanatory if you ask me.

LiveAgent rating

On Capterra, LiveAgent is rated 4.5 / 5 based on 712 reviews. On G2, it has an overwhelming 1,088 reviews with a rating of 4.5 / 5 stars.

LiveAgent vs Intercom

The first problems come right after installing LiveAgent’s live chat widget: it doesn’t look nearly as good as the other alternatives to Intercom, and the customization options are rather limited.

Chatbots, as well as auto and manual messages via chat or email, are not present in LiveAgent, and the UX/UI of the whole platform is just plain old (morally and technically) on both the customer and admin sides. These are the biggest downsides of LiveAgent compared to other Intercom alternatives and Intercom itself.

On the other hand, LiveAgent also offers a cloud call center and self-service forums which Intercom doesn’t have.

On the plus side, LiveAgent offers one of the cheapest live chat solutions and perhaps the most balanced integration between its tools. You can chat with your website visitors, call them, respond to and solve tickets, create knowledge base articles, start a forum, and check your performance reports – all in the same window.

And I can’t even compare the prices of Intercom and LiveAgent – they are just from different worlds.

5. Olark – chat-only Intercom alternative

Olark is a live chat solution for websites. Strictly speaking, Olark is not your most feature-rich Intercom alternative since live chat is the only big feature it offers. But if that’s all you really need, let’s see what it has.

Olark features

When I say that live chat is Olark’s only feature, I really mean it. It’s a nice, reliable live chat tool, though. It has all the standard live chat things: pre-chat and offline forms, shortcuts, chat rating. But then again, other Intercom alternatives have them too.

Setting Olark up is very straightforward, and there is a bunch of chat widget customization options. Unfortunately, you can’t upload your company’s logo, change the widget wallpaper, or otherwise make it look and feel truly yours.

You can also send targeted and automated messages via chat in Olark.

Olark pricing

Olark offers a 14-day trial of its paid plan and a very limited free version with 1 agent seat and only 20 chats/month. Note that you won’t be able to start the trial without credit card details.

Since they have only one real feature, Olark also offers only one paid subscription plan: for $17/mo per agent seat you get the full functionality. Subscribe for 1 or 2 years in one go and pay $15/mo or $12/mo respectively.

You can also purchase Olark add-ons (aka Powerups):

    • Non-branded chatbox for $59/mo
    • Chat translation for $29/mo
    • Visitor co-browsing for $99/mo
    • Visitor insights: $59-99/month

Olark rating

213 reviewers gave Olark only 4.2 / 5 stars on G2, while 445 Capterra users rated it 4.5 / 5.

Olark vs Intercom

Let’s make it clear right away: there is no way Olark can be a full replacement for Intercom, for the simple fact that it’s missing some vital customer support features:

  • No email marketing functionality
  • No ticketing
  • No knowledge base

So, we can only compare their live chat tools. Intercom’s messenger has an edge over Olark’s live chat in:

    • Offering more customization options (like adding company logo, wallpapers etc.).
    • Allowing for integration of your product with the platform, and therefore, giving you the ability to set up business-specific triggers and send automated messages to your users based on those.

On the other hand, Olark can boast some unique PowerUps that are hard to find in other Intercom alternatives, like Chat translation and Visitor co-browsing.

Both platforms perform really well, and it’s unlikely that you will face many bugs, if any, during everyday usage.

Final Thoughts

There are tons of great live chat software solutions available on the market.
But when it comes to picking an Intercom alternative that does more than just live chat, you need to dig deeper.

Just sign up for the one you liked the most and test it. Didn’t like it? Move to the next one. I’m sure there’s one Intercom alternative for you somewhere and you will find it very soon.

How to Start a Cannabis Delivery Business Legally?

November 14, 2019 in Biohacking

According to Fit Small Business, around 62% of Americans are in favor of legalizing marijuana. That could accelerate marijuana approval rates in the coming years.

For another statistic, as per the Missouri Department of Health and Senior Services, 2,200 business aspirants have applied for licenses to start a cannabis business. They are ready to pay thousands of dollars for the great opportunity they see in the market.

That’s a huge number compared to previous years.

Do you have a unique idea for starting a delivery business in the cannabis industry, but you’re confused about how to take the first step?

In this article, I am going to thoroughly explain the process of starting a marijuana delivery business, covering the prerequisites and the important part licensing plays. Let’s get started.

What You Will Need to Start a Cannabis Delivery Service

To build your cannabis business, you need to consider some business aspects that will help you get a toehold in this promising market.

Start With the Licensing

In the United States, many states have legalized marijuana for medical use, and a growing number for recreational use as well. Canada, as we know, is the forerunner in legalizing marijuana.

Getting your license from the state authority should be your prime focus. For example, if you want to start a cannabis business in California, you will have to approach the Bureau of Medical Cannabis Regulation. The agency is responsible for licensing cannabis retailers, distributors, testing labs, micro-businesses, and temporary cannabis events in California.

Different states and countries have different rules, laws, and licensing procedures, so go through each of them before investing in the business.

Make a Business Plan

Once you get the green light from the approving authority, you can put together an effective business plan.

Research your potential customers: know which age groups you want to target, and survey the area of your city where you want to start. Decide whether you want to build an on-demand marijuana delivery app. You will also need delivery staff to bring your products to customers’ doorsteps.

Initially, while your business has not yet gained momentum, you can deliver the cannabis products yourself.

An Application Will Play a Major Role

Now that you are building a delivery service for your marijuana business, you will need a feature-rich mobile application. There are many on-demand apps for all kinds of services on the market, but a marijuana delivery app competes in a far less crowded niche, so your business has a good chance of succeeding.

You may ask: what kind of delivery business should I start?

It’s a fair question, because there is more than one option in the delivery business. Some popular options are:

1. Delivery-Only App – If You Cultivate Marijuana

Creating a delivery app for your own products is the most cost-effective option. All you have to do is cultivate your marijuana and keep it in secure storage. Your app will list your products with alluring images, consumers will order from the app, and a delivery driver will complete the order by bringing the product to their door.

You can maintain an admin panel that displays incoming and completed orders, staff management, customer details, and inventory.
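
At its core, such a panel is just a set of views over a few simple records. A minimal sketch of the underlying data shapes (illustrative only, not a real schema):

    # Minimal data shapes behind a delivery-only app's admin panel: orders
    # and inventory. Illustrative structures, not a production schema.
    from dataclasses import dataclass, field

    @dataclass
    class Product:
        sku: str
        name: str
        stock: int  # drawn down on each completed order

    @dataclass
    class Order:
        order_id: str
        customer_id: str
        items: dict = field(default_factory=dict)  # sku -> quantity
        status: str = "incoming"  # incoming -> out_for_delivery -> completed

    def complete_order(order: Order, inventory: dict) -> None:
        """What the panel does when a delivery is confirmed."""
        for sku, qty in order.items.items():
            inventory[sku].stock -= qty
        order.status = "completed"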

2. Aggregator App – Works on Commission Based Idea

You must have ordered food from apps like UberEats or Postmates. What do they do? Restaurants register on the platform, and on every completed order, the platform takes a commission.

So what will you have to do to start an UberEats for cannabis?

Gather information on local cannabis dispensary owners, tell them about your platform, and explain how the app and the commission system will work. Ask them to register their dispensaries on your platform, and start your delivery business.
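
The commission mechanics themselves are simple. Here is a quick sketch with assumed numbers – the 15% cut and the $5 delivery fee are illustrative placeholders, not a recommendation:

    # How an aggregator platform earns per order. The commission rate and
    # delivery fee below are assumptions for illustration only.

    PLATFORM_COMMISSION = 0.15  # platform's cut of the order subtotal
    DELIVERY_FEE = 5.00         # paid by the customer, kept by the platform

    def split_order(subtotal: float) -> dict:
        commission = round(subtotal * PLATFORM_COMMISSION, 2)
        return {
            "dispensary_receives": round(subtotal - commission, 2),
            "platform_receives": round(commission + DELIVERY_FEE, 2),
            "customer_pays": round(subtotal + DELIVERY_FEE, 2),
        }

    print(split_order(80.00))
    # {'dispensary_receives': 68.0, 'platform_receives': 17.0,
    #  'customer_pays': 85.0}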

3. Online Dispensary

Do you own a brick-and-mortar cannabis store? Then transforming it into an online store will help you get more traction and give your business more exposure.

Build a web application and a dedicated mobile app for your store so that cannabis users can get your products on the go.

Promote

Once you are done with mobile application development, getting users’ attention should be your prime concern, and that can be achieved through a foolproof marketing plan. Since Facebook and Twitter have banned this kind of promotion, you will have to find other marketing channels.

Conclusion

The cannabis industry is growing rapidly, so investing in such a business can deliver a healthy ROI. As more and more licenses are issued, the coming years will bring more cannabis-related start-ups and businesses.

What are your thoughts on this controversial but bright business idea? Share your opinion in the comments.

4 Salary Negotiation Tips For Expert Tech Talent, From Expert Negotiators

November 13, 2019 in Biohacking

If you’re a developer, engineer, or some other form of an expert technologist, well, you chose a great field. So great in fact that your skill set alone probably gives you negotiation leverage you won’t find in other fields.

Because here’s the truth – the demand for quality tech talent greatly outstrips supply. And we have little reason to expect that to change in the coming years. This imbalance gives you an edge every time you enter a negotiation, whether or not you realize it.

So what to do with this negotiation leverage? The world is your oyster, as they say. In the “Disclaimer” section below, we remind you that there are dozens of job offer items you may want to negotiate into your offer, outside of salary. For a comprehensive list of those items, we recommend checking out a tool we built called the Lifestyle Calculator.

But for this post, let’s tackle the crown jewel of the bunch: salary. At 10x Ascend, our expertise lies in negotiating the best deals possible on behalf of top-tier tech talent. And historically, some of our greatest successes have come when negotiating salary specifically. One client of ours enjoyed a 100% salary increase, and many others have secured increases north of 50%. We take a lot of pride in helping these folks get the deals they deserve.

Throughout this post we’ll be offering up some of our cornerstone salary negotiation tips and explaining how you, as the tech talent, can incorporate them into your own negotiations. 

It’s often our job to help clients understand how a compensation package can extend far beyond salary. A well-crafted offer is customized to fit a candidate’s needs, and those needs can take many shapes and forms: equity, PTO, remote work, flex time, title – you get the idea. It’s never just about salary.

It’s only after explaining this often overlooked reality that we feel comfortable diving into a discussion about salary. So now, let’s get into it.

With the disclaimer behind us, let us concede that, of course, salary is important. Your salary shapes a large part of your lifestyle and financial freedom. And as we’ve explained, if you’re in the tech field, there is little reason not to negotiate for a better salary. The leverage is there; you just have to embrace it.

Other than the obvious reasons for why making more money is a positive thing, salary negotiations can have lasting career impacts of which you may not be aware.

Consider also that your career should always be evolving. This goes for title, salary, equity, etc. Ideally your next career stepping stone surpasses your last in some capacity. Salary is a prime example. Your salary often acts as a benchmark for what you should be making in your next role.

Negotiating your salary will keep you on the higher end of your potential earnings. And if nothing else, it will keep you aggressive throughout your career, as your benchmarks will always be on the higher end of the spectrum.

With more than 25 years’ experience in the negotiation business, we’ve seen it all. Below are a handful of tips we often pass on to our clients, who overwhelmingly consist of developers, engineers, and expert technologists.

1. Provide a Range When Asked About Salary Requirements

This is what we’ve come to consider the “Golden Rule” of salary negotiations. The reasons for doing this are twofold.

First, by providing a range, you mitigate the risk of underselling yourself and asking for too low of a number. One of the most common mistakes we hear in this business is that candidates low-ball themselves. It’s a bitter pill to swallow because often there is no going back. Then you’re at the mercy of your own regretful thoughts.

I wonder what I could’ve made if I’d asked for more!

Don’t be that person. By providing a range when asked about salary requirements (e.g. $150k–$200k or $750k–$850k), you’re effectively taking that mistake off the table. And to be extra safe, consider making the low end of your range a number you’d still be happy with. This way, even if they meet the low end, you’ll be in a good spot.

And second, offering a range opens the door for a continued discussion around the topic of salary. If the job offer hits your inbox and the salary included meets the very low end of your range, you’ll have some leverage. 

You can explain that the only reason your salary dipped so low was because you expected other aspects of the offer to be stronger. PTO days, vacation time, equity, bonuses, for example. The employer can then either make some concessions in other areas of the offer, or they can raise the salary to meet a more attractive point in your range.

Alternatively, if the employer meets the high end of your range, great! Win-win.

2. Master the Art of Discretion

Key to a successful negotiation is understanding that the employer on the other side of the table doesn’t need to know everything. In a nutshell, that’s discretion.

This most frequently applies when deciding what parts of your work history you want to make public knowledge. Revealing some elements of your previous work arrangements can serve you well, while others will not.

We always encourage our clients to be honest when negotiating. Lying is a bad idea. But if the truth reveals something that might harm your negotiation (assuming it’s not critical information), then don’t disclose that information!

For example, maybe you hate your current employer. Revealing this, first, means speaking poorly of your employer, which will likely be frowned upon. And second, it makes clear you are motivated to pursue a change, which might otherwise have been a leverage point. Someone who does not reveal a less-than-ideal work situation can say things like, “I am in no rush to leave my current situation, as there are many things I like here.” Or, “I will only make the move once I find the perfect opportunity.” These claims help build leverage because they show you’re not desperate.

Another example is if you felt you weren’t paid appropriately in your previous role. In this case, it’s probably best to lay out your compensation goals for this role without referencing your prior compensation. This is not critical information from an employer’s standpoint. And by the way, in many states it’s actually illegal for employers to ask how much you made in previous roles. You should be informed as to whether or not this applies to you.

3. Pursue Multiple Offers, Even if You Know Where You Want to Work

One of the best ways to create leverage in a salary negotiation is to secure multiple offers. This achieves a number of things.

First, it shows you’re a hot commodity and creates an element of social proof. If other companies want you, the company you’re currently talking to probably will too. You’ve already been validated, and whoever is lucky enough to land you will have to offer an attractive package. This happens all the time in the tech space.

Second, it allows you to be aggressive in your negotiations, knowing you have backup options. Sometimes this extra reassurance is all you need to tackle a negotiation with confidence.

And finally, securing multiple offers gives you highly valuable perspective. Sizing up each offer gives you an idea of what the market is willing to pay. It tells you who’s coming in high and who’s coming in low. This information translates to leverage more often than not. You can explain to those companies coming in low that it’ll be hard to accept those offers, so long as you have better offers on the table. If the company really wants you, they’ll come up.

Also, don’t be afraid to mix and match elements of your other offers to get the best deal. If one company offers a higher base and another offers more equity, you can reference the two best numbers with a third company to drive their offer to be the best.

4. Sacrifice Other Aspects of Your Offer in the Name of Salary

Ok, full transparency… we don’t often recommend this to our clients. But if salary is far and away the number 1 most important item on your priority list, this might be an effective way to increase your number.

As we explained in the introduction, we’ve identified two dozen different negotiation items worth considering when engaging an employer. Our stance is typically that candidates should focus on non-salary elements as a means for negotiating better, more complete deals. But again, if salary is all important for you, this thinking can be reverse-engineered to your advantage.

Each negotiation element represents a pocket of value. Stock options, vacation time, the ability to work remotely, etc. These all add value to your offer.

So in addition to a base salary, if your offer comes with a certain allocation of stock options, for example, negotiating those out of the offer in exchange for a higher salary might be possible. Here, you’re simply trading in equity value for salary value.

This might make sense for someone less willing to wait to be paid. Equity, for example, often requires a vesting period. If you need cash now, perhaps you negotiate for a better salary instead of equity, so you see larger sums of money sooner.
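
To see the trade-off in numbers, here is a toy comparison of year-one value under an equity-heavy versus a salary-heavy structure. All the figures, and the straight-line vesting assumption, are made up for illustration.

    # Toy comparison of an equity-heavy vs. a salary-heavy offer, showing
    # why someone who needs cash now might trade equity for base.
    # All numbers are assumptions.

    def year_one_value(base: float, equity_value: float,
                       vest_years: int) -> float:
        """Paper value received in year one, assuming straight-line vesting."""
        vested_now = equity_value / vest_years if vest_years else 0.0
        return base + vested_now

    equity_heavy = year_one_value(base=150_000, equity_value=200_000,
                                  vest_years=4)
    salary_heavy = year_one_value(base=190_000, equity_value=0, vest_years=0)

    print(f"equity-heavy, year one: ${equity_heavy:,.0f}")  # $200,000 on paper
    print(f"salary-heavy, year one: ${salary_heavy:,.0f}")  # $190,000 in cash
    # The equity slice only becomes real money if and when shares are liquid;
    # the salary-heavy offer is cash in hand today.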

We’ll conclude by once again zooming out a bit. While we’ve outlined a handful of specific salary negotiation tips in this post, the ultimate negotiation hack is understanding the bigger picture. And for quality tech talent, the bigger picture illustrates a highly favorable negotiation landscape.

We’re led to believe that conducting a great negotiation requires one to be a talented wordsmith, a quick-witted thinker, or some kind of strategic mastermind. But that’s not the case. Because in today’s tech landscape, your resume does the talking.

Of course, some prefer the legwork to be handled by professionals, which is exactly what we do at 10x Ascend. We hope this post empowered you to become a better negotiator on your own. But if you want some backup in your next negotiation, as many do, don’t hesitate to reach out. We’d love to hear from you. Best of luck!
