Predicting Personality Traits from Content Using IBM Watson

Would you like to know what your customers want before they do?
Of course. Every business wants to be able to anticipate the needs of their customers.
And that starts by understanding customers in a deep and meaningful way.
Unfortunately, that is getting harder and harder in our digital world…

  • Consumers are increasingly less trusting and loyal
  • Consumers have more power — thanks to social media, online shopping comparisons, and an ever-growing list of choices
  • With all the noise out there, it has become increasingly difficult to get an accurate view of your customers
  • Consumer mindsets are shifting from product ownership to experience due to economic uncertainty and information overload

All of this certainly highlights the need to create a more compelling customer experience to stay relevant.

“Customer experience is the new competitive battlefield.”
~ Gartner

But to stay close to this new digital consumer, you really need to get a view into their mind.
Fortunately, artificial intelligence is stepping up to the challenge, giving you the ability to turn all that digital transformation into digital advantage.
We can use an A.I. platform to instantly analyze and predict the values, needs and behaviors of your customers. A platform which can provide a wide range of uses — from precision marketing to product recommendations to ultra-personalized emails.
So let’s build one!

Customer Research…A.I.-Style

This application (originally developed by IBM’s Watson Developer Cloud) uses a content analysis service to analyze social media content and provide you with insights about the personality traits of the content’s author.
Want to see some source code? Here’s our fork of the application on GitHub.

Let’s get to it…

The end result

By following the steps in this guide, you’ll create an application similar to the following.

Here’s a live preview.
The application currently supports two content sources:

  • Custom content — Copy/paste any type of text content
  • Twitter stream — Classification of tweets from a particular account

It even gives you a nice visual with the results…

And of course, this is just the beginning — the application can be extended in any number of different ways. The only limit is your imagination.

How it works.

The application intelligently analyzes the core topics and tone of the content, then predicts the author’s personality traits based on that analysis.

And it uses one cloud-based service from IBM Watson:

  • Personality Insights — Classifies the content and makes a prediction for the corresponding personality traits
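
If you’d like a feel for what that service returns before deploying the full app, here is a minimal Node.js sketch that posts a block of text straight to the Personality Insights REST endpoint, using the credentials you’ll collect in Step 3. Treat the hostname, version date, and response handling as assumptions based on the v3 API of the time; verify them against the service documentation.

profile-preview.js (hypothetical)
var https = require('https');

// Hypothetical standalone sketch -- not part of the repo.
// Assumptions: the gateway.watsonplatform.net hostname, the /v3/profile path,
// and the 2016-10-19 version date reflect the Personality Insights v3 API of
// the time; check the service docs for your instance's values.
var USERNAME = 'PERSONALITY_INSIGHTS_USERNAME'; // from the credentials in Step 3
var PASSWORD = 'PERSONALITY_INSIGHTS_PASSWORD';

// Personality Insights works best with at least a few hundred words of text
var text = 'Paste a sample of the author\'s writing here...';

var req = https.request({
  hostname: 'gateway.watsonplatform.net',
  path: '/personality-insights/api/v3/profile?version=2016-10-19',
  method: 'POST',
  auth: USERNAME + ':' + PASSWORD,
  headers: { 'Content-Type': 'text/plain', 'Accept': 'application/json' }
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // The response is a JSON profile of personality traits with percentile scores
    console.log(JSON.stringify(JSON.parse(body), null, 2));
  });
});

req.write(text);
req.end();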

Step 1: Requirements

Before we create the Watson service, let’s get the system requirements covered.

Download the source repository.

To start, go ahead and download the source files.

Note: You’ll need a git client installed on your computer for this step.

Simply move to the directory you want to use for this demo and run the following commands in a terminal:

terminal
# Download source repository
git clone https://github.com/10xNation/ibm-watson-personality-insights.git
cd ibm-watson-personality-insights

At this point, you can keep the terminal window open and set it aside for now…we’ll need it in a later step.

Name the application.

Right away, let’s nail down a name for your new app:

manifest.yml
...
  # Application name
- name: xxxxxxxxxxxxxxx
...

Replace xxxxxxxxxxxxxxx in the manifest.yml file with a globally unique name for your instance of the application.
The name you choose will be used to create the application’s URL — e.g., http://personality-insights-587854.mybluemix.net/.

Create a Bluemix account.

Go to the Bluemix Dashboard page (Bluemix is IBM’s cloud platform).

If you don’t already have one, create a Bluemix account by clicking on the “Sign up” button and completing the registration process.

Install Cloud Foundry.

A few of the steps in this guide require a command line session, so you’ll need to install the Cloud Foundry CLI tool.

Open a terminal session with Bluemix.

Once the Cloud Foundry CLI tool is installed, you’ll be able to log into Bluemix through the terminal:

Note: Feel free to use the same terminal window as above.

terminal
# Log into Bluemix
cf api https://api.ng.bluemix.net
cf login -u YOUR_BLUEMIX_ID -p YOUR_BLUEMIX_PASSWORD

Replace YOUR_BLUEMIX_ID and YOUR_BLUEMIX_PASSWORD with the respective username and password you created above.

Step 2: Create the Application Container

Go to the Bluemix Dashboard page.

Once you’re signed in and see your Dashboard, click on the “Create app” button.

In this demo, we’ll be using a Node application, so click on “SDK for Node.js.”

Then fill out the information required, using the application name you chose in step #1 — and hit the “Create” button.

Set the application memory.

Let’s give your application a little more memory to work with.

Click on your new application.

Then click on the “plus” sign for “MB MEMORY PER INSTANCE” — set it to 512 — then hit “Save.”
That’s it for the application container, so let’s move on to the service instance.

Step 3: Create the Personality Insights Instance

To set up the Personality Insights service, go back to your Bluemix Dashboard page.

Then click on your application.

And that should take you to the Overview section of your application dashboard. Since this is a brand new application, you should see a “Create new” button in the Connections widget — click that button.

You should now see a long list of services. Click “Watson” in the Categories filter and then click on “Personality Insights” to create a new instance of that service.

Go ahead and choose a Service Name that makes sense for you — e.g., Personality Insights-Demo. For this demo, the “Lite” Pricing Plan will do just fine. And by default, you should see your application’s name listed in the “Connected to” field.
Click the “Create” button when ready. And if needed, update the Service Name you chose in the manifest.yml file:

manifest.yml
...
  services:
    # Service name
  - Personality Insights-Demo
...

Just replace Personality Insights-Demo with your chosen Service Name.
Feel free to “Restage” your application when prompted.

Enter service credentials.

After your Personality Insights instance is created, click on the respective “View Credentials” button.

And that will pop up a modal with your details.

Copy/paste your Personality Insights service username and password into the .env file:

.env
...
# Service credentials
PERSONALITY_INSIGHTS_USERNAME=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
PERSONALITY_INSIGHTS_PASSWORD=xxxxxxxxxxxx
...

Replace xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx with your username and xxxxxxxxxxxx with your password.
Your Personality Insights service is now ready, so let’s move on to Twitter.

Step 4: Create the Twitter App

Go to the Twitter Apps home page.

Log in with the Twitter user account you plan to use with your new app.

After logging in you should see a “Create New App” button…click it.

Enter a globally unique name for your Twitter application, a brief description, and the URL to your website.

Click on the “Keys and Access Tokens” tab.

Copy/paste your credentials into the .env file:

.env
...
# Twitter credentials
TWITTER_CONSUMER_KEY=xxxxxxxxxxxxxxxxxxxxxxxxx
TWITTER_CONSUMER_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
...

Replace xxxxxxxxxxxxxxxxxxxxxxxxx with your consumer key and xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx with your consumer secret.

Step 5: Fire it Up!

That’s it for the hard stuff. So let’s put this app to work.

Launch the application.

To bring the application to life, simply run the following command — making sure the terminal is in the repository directory and logged into Bluemix:

terminal
cf push

This command will upload the files, configure your new application and start it.

Note: You can use the cf push command to update the same application after it has been published.

Take a look.

After the server has started, you’ll be able to open the application in your browser at the respective URL.

The application should look something like this…

Play around with it and get a feel for the functionality.

Troubleshooting

If you’re having any problems with the application, be sure to check out the logs…

Just click on the “Logs” link within your application dashboard.

Take it to the Next Level

But this is just a start. The real power comes when you apply this type of analysis across all your customers’ content and brand interactions — giving you an insightful view into their thinking and habits.
How will you build on this web service? Run it separately — or better yet — integrate it with your existing marketing and hiring tools (CRM, social media automation, etc.).
And currently, you can make 1,000 API calls per month for free — so go ahead and just start playing with it!
You can dig deeper into the entire suite of Watson APIs in the IBM Watson Developer Community.
Enjoy!

This post is part of our How to Apply A.I. in Your Business blog series. Be sure to check out our past issues:

And be on the lookout for future issues; they come out every other Monday.

Amazon’s Alexa vs. Google’s Assistant: Same Questions, Different Answers

Amazon’s Echo and Google’s Home are the two most compelling products in the new smart-speaker market. It’s a fascinating space to watch, for it is of substantial strategic importance to both companies as well as several more that will enter the fray soon. Why is this? Whatever device you outfit your home with will influence many downstream purchasing decisions, from automation hardware to digital media and even to where you order dog food. Because of this strategic importance, the leading players are investing vast amounts of money to make their product the market leader.
These devices have a broad range of functionality, most of which is not discussed in this article. As such, it is a review not of the devices overall, but rather simply their function as answer engines. You can, on a whim, ask them almost any question and they will try to answer it. I have both devices on my desk, and almost immediately I noticed something very puzzling: They often give different answers to the same questions. Not opinion questions, you understand, but factual questions, the kinds of things you would expect them to be in full agreement on, such as the number of seconds in a year.
How can this be? Assuming they correctly understand the words in the question, how can they give different answers to the same straightforward questions? Upon inspection, it turns out there are ten reasons, each of which reveals an inherent limitation of artificial intelligence as we currently know it.
Case No. 1
Question: How many seconds are in a year?
Amazon Alexa: One year equals 31,557,000 seconds.
Google Assistant: One calendar year equals 31,536,000 seconds.
Question: Who designed the American flag?
Amazon Alexa: The American flag’s designer is Robert Heft.
Google Assistant: According to popular legend, the first American flag was made by Betsy Ross, a Philadelphia seamstress who was acquainted with George Washington, leader of the Continental Army, and other influential Philadelphians.
One does not have to look far to find examples like this. What seems like a straightforward question gets two very different answers. In these examples, the systems interpret the question differently. Regarding the seconds-in-a-year question, the difference is whether we are talking about a calendar year (365 days) or a solar year (365.24 days). Google at least qualifies its answer, but you have to be paying attention to even notice that.
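(For reference, the arithmetic: a 365-day calendar year is 365 × 86,400 = 31,536,000 seconds, while a solar year of roughly 365.24 days works out to about 31,556,700 seconds, which rounds to Alexa’s 31,557,000.)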
With regard to the flag, the situation is similar. Betsy Ross is credited with designing the original U.S. flag, a fact known to all grade school children. But who is Robert Heft? Turns out he designed the current fifty-star configuration.
What do we make of these? Do we say the questions are poorly formed and ambiguous? Not at all. These examples illustrate a huge shortcoming in the state of the art right now. While these devices, and AI in general, may be able to parse the words you say into a coherent question, they don’t “understand” the question the same way a human does. A human might well say, “Well, do you mean a calendar or solar year?” or “The original flag or today’s flag?” These devices are nowhere near that sophisticated. Their understanding of the world is binary, if you will excuse the pun. They understand the question or they don’t. This is a variant of the classic “common sense” problem which has been identified by AI researchers since the beginning, a problem we are nowhere close to solving.
Takeaway: The systems can understand the words but not the ambiguity within the question. Thus, with AI systems it is important to frame questions as specifically as possible, but this makes it harder for them to understand, because every additional word you use adds additional complexity for them to parse.
Case No. 2
Question: How many countries are there in the world?
Amazon Alexa: The world has 192 UN-recognized countries.
Google Assistant: According to infoplease, there are 196 countries in the world today.
Case No. 2 illustrates the different methodologies each system uses to answer questions. Amazon pulls many of its answers from Evi, a knowledge base it acquired in 2012 that has a repository of hundreds of millions of answers. Google, on the other hand, pulls many of its answers straight from the web. In fact, you know how sometimes you do a search in Google and the answer comes up in snippet form at the top of the results? Well, often Google Assistant simply reads those answers.
There are implicit tradeoffs in each approach. On the one hand, Google will answer many more questions than Amazon will, but the answers aren’t curated, or at least are only algorithmically curated. In other words, they may be wrong, as we will see in a bit. On the other hand, using a curated repository may give you more accuracy but fewer answers and, more important, a data set that ages and thus degrades over time.
That’s what is going on here. Google cites its source, infoplease, which reports that there are 196 countries. This is 193 UN-recognized countries plus the Vatican, Taiwan and Palestine.
So, why does Google report 193 UN-recognized countries, but Amazon only 192? This is due to the aforementioned inherent flaw in Amazon’s approach, or at least Evi’s. Evi’s exact answer appears here: https://www.evi.com/q/how_many_countries_are_there_in_the_world. The data is about six years out of date, though. South Sudan became the 193rd nation added to the UN on July 14, 2011.
Takeaway: No flawless methodology exists for building and maintaining a knowledge base.
Case No. 3
Question: What is the fastest animal?
Amazon Alexa: The fastest non-human animal is a peregrine falcon.
Google Assistant: Popular on the web cheetah, lion, wildebeest and others.
Question: How fast is the fastest animal?
Amazon Alexa: The fastest non-human animal’s top speed is 240 mph.
Google Assistant: Here’s a summary from Wikipedia: Fastest land animal. Fastest feline. The cheetah can accelerate from zero to 96.6 kph in under three seconds.
These cases show where the Google approach gives inferior answers. If you type the first question into Google, you get a “Popular on the web” snippet with photos of several candidates. Google just reads this, even omitting any kind of pause after “web” and before “cheetah.” To top it off, the correct answer isn’t even in the list it reads, and appears 10th in the list of animals.
In the second question, you also don’t get the correct answer from Google. Instead, you get an excerpt from Wikipedia that doesn’t answer the question.
Takeaway: Even a simple question can have all kinds of complexity in finding and delivering the answer.
Case No. 4
Question: How many countries are there in the world?
Amazon Alexa: The world has 192 UN-recognized countries.
Question: How many countries are there?
Amazon Alexa: There are 193 UN-recognized countries.
Notice that both of these answers are from Alexa. They are different answers to what is essentially the exact same question. This is particularly disturbing, because there isn’t even a nuanced difference between these questions. The fact that Alexa mentions 193 UN-recognized countries for the first time suggests parts of its knowledge base are updated, and parts aren’t. That’s fine, as far as it goes, but that is not the core problem. The problem is that it “hears” these as different questions.
Takeaway: These systems can understand each word you say but misunderstand the question in profound ways.
Case No. 5
Question: How much does a photon weigh?
Amazon Alexa: Photon’s average weight is 0 ounces (0 grams).
Google Assistant: According to the Guardian, if it has energy, according to Einstein’s famous equation of general relativity, it has mass: M = E/C2. We know also that light is bent by gravity. So if we think of light as being particulate, a stream of photons, it must weigh something, otherwise gravity wouldn’t have any effect upon it.
So Alexa says it has no weight and Google says it must have weight. Alexa’s answer comes from Evi, and the Evi entry cites Wikipedia. Wikipedia doesn’t say this exactly, however. It says, “The photon has zero rest mass and is always moving at the speed of light.” But the answer is effectively the same.
The Google answer comes word-for-word from the snippet, which links back to the Guardian article. A nitpick: Google reads M = E/C2 as “M equals E C 2” not “M equals E divided by C squared.”
Question: What caused the Civil War?
Amazon Alexa: The American Civil War started when the Southern states seceded, forming the Confederate States of America, and attacked Fort Sumter in South Carolina. The war started over irreconcilable political differences between the North and the South, including the spread of slavery and the power of the centralized federal government.
Google Assistant: According to Civil War Trust, while many still debate the ultimate causes of the Civil War, Pulitzer Prize-winning author James McPherson writes that, “The Civil War started because of uncompromising differences between the free and slave states over the power of the national government to prohibit slavery in the territories that had not yet become—”
The Google answer is cut off because the snippet on google.com is cut off at that point, presumably for length. The quoted paragraph, and answer, goes on another 84 words.
Question: Is time travel possible?
Amazon Alexa: The theories of special and general relativity allow for the hypothetical possibility of time travel if suitable space-time geometry or motion of space existed. However, time travel is not possible at this time.
Google Assistant: According to physics.org, but there may be an out to be found in general relativity, Einstein’s theory of gravity that unites space and time as “spacetime,” which curves in the presence of mass.
A couple of things are going on in Google’s case on the time travel question. First, the snippet is missing some context. What Google should have included from the physics.org piece are some earlier words on the page that say how traveling back in time would require going faster than the speed of light, which isn’t possible. With that lead-in, the Google snippet that begins with “but there may be an out to be found…” makes total sense.
But there is another issue as well. When I started working on this article in May, the above answer was given by Google Assistant. Now, in preparation for publication, we re-checked all of the questions, and lo and behold, Google Assistant demurs with the phrase: “Sorry, I don’t know how to help with that.” Now, the possibility of time travel hasn’t changed in the intervening few weeks, nor has there been a scientific breakthrough that has altered our understanding of how one might travel in time, nor has physics.org revised the page. Instead, a change on Google’s part, either algorithmically or data based, has removed the physics.org snippet. Presumably, a future revision may bring that answer back, or a different one. This isn’t a problem per se, since we all expect Google to get ever better over the years. Rather, it is indicative of those shifting sands upon which Google Assistant rests. For better or worse, don’t expect the answer you get today to match the one you get tomorrow.
Takeaway: The main point of Case No. 5 is that these products give very short answers to very complicated questions. Some things, many things in fact, cannot be reduced to just a soundbite (or soundbyte) of an answer.
Case No. 6
Question: What’s the current temperature of Austin, Texas?
Amazon Alexa: Currently, in Austin, Texas, it’s 71 degrees with clear skies and sun. Tonight, you can look for clouds with a low of 69.
Google Assistant: The temperature in Austin right now is 75.
In this case, Alexa’s answer does not come from Evi. Evi says, “Sorry, I don’t yet have an answer to that question.” A look around the web surfaces a help page on Amazon that explains that Alexa uses AccuWeather for the latest weather information. Google sources its weather snippets from weather.com. Four degrees is a pretty stark difference given that this should be a pretty straightforward fact. I periodically checked on other days, and while occasionally they gave the same temperature, they were almost always a little different.
Takeaway: Data is seldom fungible, even if it looks like it should be.
Question: What’s the population of Austin, Texas?
Amazon Alexa: The population of Austin, Texas, is about 913,000.
Google Assistant: The population of Austin was 885,400 in 2013.
Alexa is pulling this data from Evi, which in turn is referencing both Wikipedia and Austin.gov. But the data is outdated. Wiki now says the population is 931,830. As best I can tell, the 913,000 number comes from a July 2014 Census Bureau estimate. Google’s snippet cites an older U.S. Census Bureau number, so its data is more outdated than Alexa’s. But Google gets bonus points for giving an actual date along with the number.
Question: How many people are there in the world?
Amazon Alexa: The population of the world is about 7,400,000,000.
Google Assistant: The population of Earth was 7.125 billion in 2013.
Once again, Alexa is using Evi. It cites and links to Wikipedia, but the article it links to doesn’t include an actual population number, as it is the Wikipedia entry for “World,” which simply says, “World population is the sum of all human populations at any time.” (Duh!) The actual article about world population on Wikipedia says, “As of March 2017, it was estimated at 7.49 billion,” so Alexa is outdated. Google pulls its figure from the World Bank, but that data is not updated every year, so this information is outdated as well. But again, kudos to Google for adding the year.
Takeaway: Different high-quality data sources can still give you different data. Transparency, such as including citations and dates, helps a bit.
Case No. 7
Groucho Marx hosted a quiz show called “You Bet Your Life” throughout the 1950s. If contestants weren’t doing well, Groucho would take pity on them and throw them a puffball question: “Who’s buried in Grant’s Tomb?” All they had to do was say “Grant” and they won some cash.
This answer, strictly speaking, is not entirely accurate for two reasons. The first “gotcha” is that the tomb contains both Ulysses Grant and his wife, Julia. Second, no one is buried in a tomb. They are entombed in it, in sarcophagi that are above ground. Thus was born a pretty tricky riddle I remember from my elementary school days back in the 1970s. How do Alexa and Google Assistant fare with it?
Question: Who’s buried in Grant’s Tomb?
Amazon Alexa: Grant’s Tomb’s dead occupants are Ulysses S. Grant and Julia Grant.
Google Assistant: Ulysses S. Grant was buried in General Grant National Memorial, New York City, New York.
The Alexa answer appears to be cognizant of the “gotcha” in the answer but avoids splitting the hairs of the method of interment. So while Alexa makes a factually true statement, it would leave the questioner with the impression that someone is buried there. But all in all, it’s a pretty good answer. Google’s answer, less so, but Groucho would have still paid up.
Question: What’s the nearest star?
Amazon Alexa: The nearest star is the sun.
Google Assistant: Proxima Centauri is 4.243 light-years from Earth.
Again, trick question, but we have to give the point to Alexa. However, coaxing the probably intended answer from Alexa—Proxima Centauri—is tricky, and I couldn’t figure out a way to do it. If you really wanted to know the closest star not counting the sun, it would probably get pretty annoying with Alexa answering “the sun” over and over. In fact, my attempts to get Proxima Centauri from Alexa revealed a pretty epic fail. I asked, “What is the closest star to the sun?” and received the reply that “Mercury is the closest planet from the sun, as its average distance is 57.9 kilometers.” Thinking it must have misheard me, I repeated the question a number of times before checking the Alexa app which logs my questions. It heard me correctly. Needless to say, Mercury is not 57.9 kilometers from the sun, for that’s barely more than you run in a marathon. I asked Google Assistant, which informed me that Mercury is 35.98 million miles from the sun, a distance more than slightly longer than 57.9 kilometers. Oddly, if you ask Alexa, “How far is Mercury from the sun?” you get the correct answer.
Takeaway: These systems don’t understand nuance in the least. Unless explicitly programmed with “the answer,” they can’t in and of themselves break out of their narrow algorithmic methodology. Only straightforward questions can expect an answer, and even then, it is subject to the other challenges we are exploring.
Case No. 8
Question: Is Bigfoot real?
Amazon Alexa: According to Wikipedia, scientists discount the existence of Bigfoot and consider it to be a combination of folklore, misidentification and hoax rather than a living animal.
Google Assistant: Based on fur samples, DNA and a bunch of shaky videos, I’d say the truth is waiting to be found.
Question: Are eggs healthy?
Amazon Alexa: Yes, all eggs are healthy.
Google Assistant: According to Authority Nutrition, eggs are loaded with high-quality proteins, vitamins, minerals, good fats and various trace nutrients.
The Bigfoot question sort of speaks for itself. The egg one, however, is a little weird. I asked Alexa about milk being healthy and got, “Sorry, I couldn’t find the answer to your question.” Then I asked about cheese. Same answer. Then sugar. Same answer. Then jogging. Same answer. Then vegetarianism. Same answer. Then I asked about eggs, and I got the absolute, unqualified blanket statement that “All eggs are healthy.” Wow. Does that include Cadbury eggs, python eggs and spider eggs? Evidently. But even putting that aside, this answer comes from Evi, which links to a Wikipedia article that makes no such claim at all. It does, however, contain phrases such as, “There is debate over whether egg yolk presents a health risk.” And, “Studies have shown conflicting results about a possible connection between egg consumption and Type 2 diabetes.”
Takeaway: When asked opinion questions about which there is no widely agreed-upon objective answer, these systems can be as opinionated as anyone.
Case No. 9
Question: Who invented the lightbulb?
Amazon Alexa: Light bulb’s inventors are Thomas Edison, Humphry Davy, Joseph Swan, Lewis Howard Latimer, Henry Woodward and Mathew Evans.
Google Assistant: Incandescent lightbulb was invented by Thomas Edison, Joseph Swan and Hiram Maxim.
It all boils down to what “invented” means. We can infer that Alexa and Google use different definitions. Alexa named everyone who ever had a patent on it, it seems, except Maxim, who had a patent dispute with Edison over the bulb. Google just named Edison, Swan and Maxim, who were contemporaries and competitors.
Takeaway: These systems make assumptions about the meanings of words that they do not explain. Unlike a human, of whom one could ask, “What do you mean by ‘invented’?” the systems don’t have a mechanism to give you visibility into what they mean by different words. For instance, they don’t start with, “If by invented you mean ‘filed for a patent’ then …”
Case No. 10
Question: What rhymes with orange?
Amazon Alexa: Well, Blorenge sort of rhymes with orange. If you happen to be writing a poem about a prominent hill in Wales, you’re in luck.
Google Assistant: According to Oxford Dictionaries, the only word in the 20-volume historical Oxford English Dictionary that rhymes with orange is sporange, a very rare alternative form of sporangium.
Sometimes they just have different answers for any number of reasons, often which cannot be identified with any certainty. This is the catch-all case, where the answers are simply different.
Takeaway: Sometimes a root cause is unclear, so even if none of the other cases apply, the answers can still be different.
What are the overall conclusions to be drawn from all of this? A few things pop to mind:
First, articles like this that focus only on what systems get wrong can give the incorrect perception that the systems aren’t very good. This is emphatically not the case. I love both of these devices and use them every day. The purpose of this piece is not to disparage these products, but to explore their pitfalls and limitations so that they can be used intelligently.
Second, this is a new category, just a few years old. We can and should forgive them their rough edges and can be certain that these products will get substantially better over time.
Third, these devices have a huge range of additional functionality unrelated to questions and answers that is beyond the scope of this piece. I would say overall that their various other features are much further along than the Q&A part.
Fourth, the biggest takeaway is just how hard AI is. Transcribing natural language is only the first step, comprehending all of the nuance is incredibly difficult, and we are still a long way away.


Special thanks to Christina Berry, Gigaom’s Editorial Director, who ran down all of the sources for the answers to all of the various questions and helped figure out what was going on in each of the ten cases.

The Future of Business is a Digital Spokesperson — Let’s Build a Preview Using Microsoft’s Bot Framework

Everyone loves a great conversation.

And the conversational U.I. is coming on quickly.

Chatbots have been all the rage for the past couple of years — and the technology is quickly catching up to the hype.

As our trust in artificial intelligence grows, so too does our faith in letting it do more and more important work for us.

And what’s more important than interacting with your customers?

Well, A.I. is ideally suited for providing your customers with the personalized interactions they want — without you needing to hire a small army to man a call center.

Rise of the Conversational U.I.

The conversational user interface is about self-service — a major trend going back to ATMs. Self-service allows customers to solve problems on their own terms — while simultaneously reducing your costs.

The conversational U.I. is also a richer interaction than a website, providing your customers with a more natural and comfortable experience while giving your business a deeper understanding of that customer’s emotions and urgency during the interaction.

Era of the Digital Spokesperson

If you project this trend out, it leads us to a point where A.I. could become the entire front end for your business. And in fact, the consulting firm Accenture is already predicting exactly that.

Trend #1 in Accenture’s “Technology Vision 2017” is “A.I. is the new U.I.,” anticipating…

“A.I. is making every interface both simple and smart…A.I. is poised to act as the face of a company’s digital brand and a key differentiator.”

So let’s explore what it takes to build out a chatbot capable of spanning the wide range of digital channels. And for the purposes of this guide, we’ll be using Microsoft’s Bot Framework to build it.

A.I.-Powered Conversational Interface

Without any further ado, let’s jump right into it…

That may look like a lot of steps, but it’s actually pretty easy to do.

The end result

This guide will provide you with the perfect starting point — a simple bot, running in the cloud, that responds to your interactions.

What you do with it from there is where the real magic begins — adding communication channels, enhancing functionality, and deepening conversation dialog.

How it works.

The architecture is pretty simple, but there are a few moving parts.

The bot itself is only a single file (written in Node.js), which we will store in a GitHub repository. And then we’ll use that GitHub repo as a deployment source for a Web App on Azure.

And from there, the Bot Framework Connector Service will allow you to wire up the bot to a wide range of different communication channels.

Ready to go? Let’s get to it…

What You’ll Need

Right off the bat, let’s get the initial requirements knocked out.

Install Node.js

To start, this application is written in Node.js, so you’ll need to install it. You’ll also need Node’s package manager — npm — which should be included with the Node.js installation.

Download the source repository.

Next, let’s pull down the source files. (You’ll need a git client installed on your computer for this step.)

Move to the directory you want to use for this guide and run the following commands in a terminal:

# Download source repository
git clone https://github.com/10xNation/microsoft-bot-framework-nodejs.git
cd microsoft-bot-framework-nodejs

The repository only includes a couple of files — so don’t blink.

Create an Azure account.

Go to the Azure home page (Azure is Microsoft’s cloud services platform).

If you don’t already have an Azure account, go ahead and create one by clicking on the “Free Account” button and completing the registration process.

Install the Bot Framework emulator.

You’ll also need to install the Bot Framework emulator.

The emulator is a desktop application that allows you to test and debug bots on your local machine (or via tunnel). It runs on Windows, Mac, and Linux. Just follow the provided directions to install — it’s super easy to use.

Step 1: Create the Repository

Next, go to the GitHub home page and register for a new account if you don’t already have one.

Once you’re signed in, click on the plus sign and “New repository.”

Enter a name for your new repo and hit “Create repository.”

Next, we’ll spin up the app…

Step 2: Create the Application

We need an Azure Web App instance to power the bot in the cloud, so go to your Azure Dashboard and sign in with your Azure account.

Click on the “+ New” button.

Then select the “Web + Mobile” and “Web App” options.

On the Web App Create page, enter an “App name,” select a “Subscription,” and then create or select a “Resource group.” The system should automatically create/choose an “App Service plan/Location” for you.

Once everything is filled out, hit “Create.”

And after a few minutes, the new subscription will show up on your dashboard. Go ahead and click it (the App Service).

Configure the deployment options.

That should take you to the Overview tab for your new app.

Click on the “Deployment options” tab.

And then click on “GitHub.”

Go ahead and follow the prompts to authorize Azure to access your GitHub account / organization.

And hit “OK”

Hit “Choose project” and select the repo you created in step #1. If desired, select a branch — we’ll stick with master for this demo.

Once the GitHub repo is fully configured, click “OK.”

And that’s it for the web app, so let’s spin up the bot…

Step 3: Register the Bot

For your bot to be publicly accessible, you’ll need to register it with the Bot Framework platform, so go to the Developer Portal and sign in with your Microsoft account (same as Azure portal above).

Click on “My Bots.”

And “Register.”

Then jump back to your Azure App Service Overview tab from step #2 and copy the “URL.” Paste that URL into the “Messaging endpoint” field and add on /api/messages (see below).

Also enter a “Display name,” “Bot handle,” and “Long description.” Then click on “Create Microsoft App ID and password” to get your app ID.

Click on “Generate an app password to continue.”

And “Ok.”

Then “Finish and go back to Bot Framework.” And that will take you back to the Bot registration page with a pre-populated “app ID.”

All you have to do now is verify your email address in the “Owners” box, agree to the terms statement, and hit “Register.”

That should give you a simple success prompt. So hit “OK.”

And at this point you should see a dashboard for your shiny new bot.

So let’s put down some code for this new bot…

Step 4: Build the Bot

When it comes to software development, this is about as easy as it gets.

Configure Node.js.

Spin up the application settings using the command below:

Terminal
npm init

Enter responses to the screen prompts as desired — defaults are fine.

Install the Node.js modules.

Next, let’s install the Node.js requirements:

Terminal
npm install --save botbuilder
npm install --save restify

That should only take a few seconds, so let’s move on to where the magic really happens…

Create the bot application.

The repo you cloned in What You’ll Need already includes this file, but below is the code if you’d like to see a preview:

app.js
var restify = require('restify');
var builder = require('botbuilder');
// Setup Restify Server
var server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 3978, function () {
  console.log('%s listening to %s', server.name, server.url);
});
// Create chat connector for communicating with the Bot Framework Service
var connector = new builder.ChatConnector({
  appId: process.env.MICROSOFT_APP_ID,
  appPassword: process.env.MICROSOFT_APP_PASSWORD
});
// Listen for messages from users
server.post('/api/messages', connector.listen());
// Receive messages from the user and respond by echoing each message back (prefixed with 'You said:')
var bot = new builder.UniversalBot(connector, function (session) {
  session.send("You said: %s", session.message.text);
});
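
Once you’ve seen the echo behavior, that final function is the natural place to start experimenting. Below is a small, hypothetical variation (not part of the repo) that answers a “help” message differently while echoing everything else; it uses only the same botbuilder calls already shown above.

app.js (hypothetical variation)
var restify = require('restify');
var builder = require('botbuilder');

// Same Restify and connector setup as app.js above
var server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 3978, function () {
  console.log('%s listening to %s', server.name, server.url);
});
var connector = new builder.ChatConnector({
  appId: process.env.MICROSOFT_APP_ID,
  appPassword: process.env.MICROSOFT_APP_PASSWORD
});
server.post('/api/messages', connector.listen());

// Reply to "help" with a short explanation; echo everything else
var bot = new builder.UniversalBot(connector, function (session) {
  var text = (session.message.text || '').trim().toLowerCase();
  if (text === 'help') {
    session.send("I'm a simple echo bot. Type anything and I'll repeat it back.");
  } else {
    session.send('You said: %s', session.message.text);
  }
});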

Ready to see the bot in action?

Step 5: Test It

Now let’s test your bot using the emulator you installed in What You’ll Need.

Start the bot.

Move to the repo directory from What You’ll Need and run the following command in a terminal to start your bot:

Terminal
node app.js

This will run the application on the port we listed in the app.js file — 3978.

All you’ll see is a simple listening notice.

Connect to the emulator.

Once your bot is running, fire up the bot emulator and enter the URL
http://127.0.0.1:3978/api/messages, and your App ID and password.

Then hit “Connect.”

Have a chat.

Now that your bot is running locally and is connected to the emulator, try out your bot by typing a few messages in the emulator.

If all went as planned, you should see the bot respond to each of your messages with the original message prefixed with “You said:”.

Step 6: Push it to the Cloud

Now we’re ready to publish the bot to Azure.

Upload the repository.

And the easiest way to do that is to delete the .git folder in the repo you cloned in What You’ll Need, then reinitialize it as a new repo.

So go ahead and delete .git and run the following commands in the repo’s directory:

git init
git add .
git commit -m "Initial commit"
git remote add origin git@github.com:PROFILE/REPO_NAME.git
git push -u origin master

Replace PROFILE and REPO_NAME with the respective profile/organization and repository name you created in step #1.

Note: if you chose a branch other than master while configuring the “Deployment options” in step 2, then replace master above with the appropriate branch.

Set the environment variables.

To keep things secure, you’ll need to add your App ID and password to the Web App settings, so they don’t have to be configured in the app.js file (and publicly available on github.com).

Jump back to your Azure Web App dashboard from step #2 and click on the “Application settings” tab, then scroll down to the “App settings” section.

We need to add two environment variables: For the first one, enter “MICROSOFT_APP_ID” for Key and your App ID from step #3 in the Value field. And for the second one, enter “MICROSOFT_APP_PASSWORD” for Key and your respective app password for Value.

Then hit the “Save” button.

Verify deployment.

Once Azure finishes reconfiguring itself, you should be able to see your bot on the My Bots Dashboard.

Click on it.

Then click on “Test” and enter something in the message box.

You should see a response similar to what you got in the emulator above.

Good stuff!

Step 7: Connect the Bot to Other Channels

One of the biggest advantages of using the Bot Framework is the ease with which you can make your bot available on nearly all of the major chat platforms — so take advantage!

Simply click on whichever channel you want to add and follow the prompts.

That’s it!

Troubleshooting

If you’re having any problems with the application, be sure to check out the logs.

Emulator logs.

The emulator makes it pretty easy to track what’s happening.

Take a look in the “Log” section of your emulator for specifics.

Azure logs.

Within Azure, jump back to your dashboard and click on the “Diagnose and solve problems” tab.

From this tab you have access to a number of different tools that can help track down any issues.

Take it to the Next Level

Congratulations! You’ve successfully created a bot that can communicate across a wide range of channels.

But this demo is just a starting point — the real fun begins when you start adding more and more channels and capabilities to your bot, building it up into a Digital Spokesperson.

You can dig deeper into the Bot Framework in the Bot Framework Documentation or the Bot Builder SDK for Node.js.

Enjoy!

This post is part of our How to Apply A.I. in Your Business blog series. Be sure to check out our past issues:

And be on the lookout for future issues; they come out every other Monday.

AI Startups: If You Say You Are Doing AI, Show It

Artificial intelligence is generating a lot of deserved attention for ushering in wide-ranging changes not only in tech but in many aspects of life. Like the Internet, AI is poised to change the way we live our lives and how we work. As with any major disruptive force, though, AI comes with plenty of noise surrounding the signal.
AI has now become a buzzword. Startups work AI into their pitches even if their businesses aren’t really oriented around the technology. That’s understandable. It is an exciting time for this technology as we see consumer and B2B acceptance combining with rapid advances in functionality. And even though big tech firms like Google and Facebook get a lot of the attention, I believe it is the startups that will drive the AI disruption wave.
Coming off Gigaom’s GAIN AI startup challenge, I do, however, see startups struggling to make their case. How do you rise above the noise when making your case to a VC like me? Here are a few questions I’d pose to any AI startup, bearing in mind that there are no typical VCs and no typical startups:

  1. What are the founders passionate about? No matter how dazzling the technology, the human factor is still more important. You can learn a lot about the founders’ grit and business acumen by looking at their track record – but it’s also important to get a sense of their vision and passions. Since AI has far-reaching implications, founders need to be able to see how their businesses will harness data in new ways and take advantage of a pervasive computing environment. They need to have a strong vision of this future and be able to communicate the excitement of that vision.
  2. Are you solving a real-world problem? As a VC, I often come across companies that have developed innovative technology in search of a problem. It’s much better if you have created a solution to an existing and acknowledged problem.
  3. Is AI core to your strategy? If you put AI, machine learning, natural language processing or speech recognition in your deck, you must be able to speak to that. Often when I ask about the inclusion of these terms in a startup’s deck, the founder tells me to speak to the company’s CIO or “data person.” If AI is core to the company’s strategy, it is critical that the CEO knows how it works and can speak to it.
  4. What market are you going after? This is all-important. You could have a great solution to a business problem, but if your potential customers are from non-profits, then you aren’t going to make a lot of money. Conversely, if you’re going into a huge and competitive market, how are you differentiating yourself? How will you address that challenge?
  5. Where do your AI people sit in the organization? Google, Amazon, Facebook, Apple, and Baidu are monopolizing AI talent. So, while there is still plenty of opportunity to go after AI talent, the odds are that the majority of the people in your company will not be AI experts. It is important to communicate who exactly has the AI expertise and where they sit within the organization. Are they on the product team or the development team? While there’s no right answer necessarily, I look for companies that can show they’ve thought this through, and can tell me who has the influence and is driving their roadmap.
  6. How talented is your AI talent? Modern deep learning barely existed before 2006, so chances are that your AI expert will be on the young-ish side. Still, if your only AI person is 22 and just out of school, that may not convey your bench strength. Building your team requires thoughtfulness – make sure you are allocating the resources you need to ensure you will succeed.

The bottom line: VCs who know the segment will quickly be able to see, or see through, the depth of your AI knowledge. To ensure your credibility and be taken seriously as an artificial intelligence startup, if you say you’re doing AI, show it.


Rudina Seseri is founder and managing partner at Glasswing Ventures and Entrepreneur-In-Residence at Harvard Business School.



Join us in Boston on June 20 for a hands-on introduction to AI at the Gigaom AI Workshop.

How to Predict When You’re Going to Lose a Subscriber

No business likes to lose customers.
And today’s business world is more competitive than ever. Your customers have more options — and your competitors can reach them more easily than ever before.
So customers are constantly juggling a decision around where to spend their money.
Consequently, developing a strategy to retain customers is now an essential part of any business.
But every customer leaves for different reasons, and an individualized retention campaign can be costly if you apply it to every one of your customers.
However, if you could predict in advance which customers are at risk of leaving, you could reduce those costs by directing efforts solely at the folks who are at high risk of jumping ship.
Fortunately, we can use artificial intelligence — or more specifically, a machine learning platform — to predict when a single customer is likely to leave based on their actions (or inaction). This is often called ‘churn.’
Although churn rate originally started out as a telecom concept, today, it’s a concern for businesses of all shapes and sizes — including startups.
And thanks to a number of cloud-based prediction APIs, accurately predicting churn is no longer exclusive to big businesses with deep pockets.

A.I.-Powered Churn Prediction

Churn prediction is one of the most popular uses for machine learning in business. It’s basically just a way of using historical data to detect customers who are likely to cancel their service in the near future.
In effect, we want to be able to predict an answer to the following question: “Is this particular customer going to leave us within the next X months?”
And of course, there are only two possible answers to that question — yes or no. Easy.
For this guide we’re going to use BigML to make those predictions.
BigML provides a convenient graphical interface for setup, visualization of the data, and the final predictions. Everything is point-and-click — no coding necessary.
So let’s get to it…

Looking for an on ramp?

This is a how-to guide intended for developers and tech-savvy business leaders looking for a proven entry point into A.I.-powered business systems.

And the steps are really easy — it’ll only take a few minutes to run through this.

What You’ll Need

Right off the bat, let’s get the initial requirements knocked out.

Create a BigML account.

If you don’t already have a BigML account, go ahead and set one up.

Simply submit the form and activate your account — the service is free to use for datasets under 16MB (which our dataset is).

Step 1: Create the Dataset

To start, go to your BigML Dashboard.

If you’re signed in, you should see the “Sources” tab.
As a quick aside to help clarify what you’ll see in each tab:

  • Sources — view of the raw data sources you have in your account
  • Datasets — view of the processed data (from the original source)
  • Supervised — view of the supervised models you’ve generated
  • Unsupervised — view of the unsupervised models you’ve generated
  • Predictions — view of the predictions you’ve made from the models
  • Tasks — view of the jobs you’ve run

Click on the “Churn in the Telecom Industry” item. This dataset lists the characteristics of a number of telecom accounts — including features and usage — and whether or not the customer churned.
Next, click on the “1-CLICK DATASET” link. This will — as the name implies — process the raw source data into a properly formatted Dataset so we can start building models from it.

After a few seconds the job will complete and you should see the Datasets tab full of your new Dataset’s attributes and respective statistics.

And that’s it for the Dataset, so let’s start building models.

Step 2: Create the Model

To build your prediction model, click on the “1-CLICK MODEL” link.

After a few seconds the job will complete and you should see the Models tab with a colorful decision tree full of your new model’s decision nodes.

If you mouse-over one of the nodes, you’ll see its respective details.

And that’s it for the model, so let’s start making predictions!

Step 3: Test a Prediction

And now for the fun part.

Click on the “PREDICT” link.
As another quick aside, here’s what each prediction option does:

  • PREDICT QUESTION BY QUESTION — the system will ask you a series of questions then make a prediction based on your answers
  • PREDICT — provides a screen to adjust each attribute and get an instant prediction
  • BATCH PREDICTION — as the name implies, allows you to make predictions for a list versus just one

On the “Predict” screen you can start playing with different parameters to see which thresholds will predict whether a customer will churn or not.

As you adjust each attribute, you’ll see an instant update of the prediction — including a score of how confident the system is for that respective prediction (100% = complete confidence, 0% = no confidence).
And that’s it!

What’s Next

You now have a powerful tool to help fine-tune your efforts at keeping your customers. However, this is just the tip of the iceberg. The real fun begins when you upload your own data.
And then all that’s left is for you to tie this service into your existing marketing planner or automation platform and you’ll be off and running.
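If you go the integration route, BigML also exposes its predictions over a REST API. Here is a minimal Node.js sketch of what such a call might look like; the /prediction endpoint, the semicolon-separated authentication parameters, and the example input fields are assumptions to check against BigML’s developer documentation and your own model.

predict-churn.js (hypothetical)
var https = require('https');

// Hypothetical sketch -- verify the endpoint, auth format, and field keys in BigML's docs.
var USERNAME = 'YOUR_BIGML_USERNAME';
var API_KEY = 'YOUR_BIGML_API_KEY';
var MODEL_ID = 'model/xxxxxxxxxxxxxxxxxxxxxxxx'; // the model you built in Step 2

var payload = JSON.stringify({
  model: MODEL_ID,
  // Example inputs only; your model may expect BigML field IDs rather than names
  input_data: {
    'customer service calls': 4,
    'total day minutes': 250.3
  }
});

var req = https.request({
  hostname: 'bigml.io',
  path: '/prediction?username=' + USERNAME + ';api_key=' + API_KEY,
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(payload)
  }
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var prediction = JSON.parse(body);
    // Expect a predicted churn value plus a confidence score in the response
    console.log(prediction.output, prediction.confidence);
  });
});

req.write(payload);
req.end();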
Be sure to spend some time browsing the different features BigML provides; there’s a long list of useful things you can do — including some nice visualization tools to help drill into your data.
You can dig deeper into BigML’s API — including additional tutorials — in the developer documentation.
Enjoy!

This post is part of our How to Apply A.I. in Your Business blog series. Be sure to check out our past issues:

And be on the lookout for future issues; they come out every other Monday.

Computers Are Opening Their Eyes — and They’re Already Better at Seeing Than We Are

For the past several decades we’ve been teaching computers to understand the visual world. And like everything in artificial intelligence these days, computer vision is making rapid strides. So much so that it’s starting to beat us at ‘name that object.’
Every year the ImageNet project runs a competition testing the current capability of computers to identify objects in photographs. And in 2015, they hit a milestone…
Microsoft reported a 4.94% error rate for its vision system, compared to a 5.1% error rate for humans.
While that doesn’t quite give computers the ability to do everything that human vision can (yet), it does mean that computer vision is ready for prime time. In fact, computer vision is very good — and lightning fast — at narrow tasks. Tasks like:

  • Social listening: Track buzz about your brand and products in images posted to social media
  • Visual auditing: Remotely monitor for damage, defects or regulatory compliance in a fleet of trucks, planes, or windmills
  • Insurance: Quickly process claims by instantly classifying new submissions into different categories
  • Manufacturing: Ensure components are being positioned correctly on an assembly line
  • Social commerce: Use an image of a food dish to find out which restaurant serves it, or use a travel photo to find vacation suggestions based on similar experiences, or find similar homes for sale
  • Retail: Find stores with similar clothes in stock or on sale, or use a travel image to find retail suggestions in that area

This is a game-changer for business. An A.I.-powered tool that can digitize the visual world can add value to a wide range of business processes — from marketing to security to fleet management.

Unlocking Data in Visual Content

So here’s a step-by-step guide for building a powerful image recognition service — powered by IBM Watson — and capable of facial recognition, age estimation, object identification, etc.
The application wrapped around this service (originally developed by IBM’s Watson Developer Cloud) is preconfigured to identify objects, faces, text, scenes and other contexts in images.

A quick example.

And by the way…Here’s what Watson found in our featured image above:

Classes                        Score
guitarist                      0.69
musician                       0.77
entertainer                    0.77
person                         0.86
songwriter                     0.61
bass (musical instrument)      0.53
musical instrument             0.54
device                         0.54
orange color                   0.95

Type Hierarchy
/person/entertainer/musician/guitarist
/person/songwriter
/device/bass (musical instrument)

Faces                          Score
age 18 – 24                    0.50
male                           0.02

Not too shabby. Watson correctly identified the image as a person with a guitar. It also found the face, which is pretty impressive. But it was unsure about the guitarist’s age and gender.
Personally, I would guess our guitarist is a woman based on the longer hair and fingernails. And no doubt Watson will be able to pick up those subtle clues as well in the near future.

Note: The “Score” is a numerical representation (0-1) of how confident the system is in a particular classification. The higher the number, the higher the confidence.
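
For context, scores like the ones in the table above come back in the raw JSON response of the Visual Recognition classify call. Here is a minimal Node.js sketch of that request; the hostname, api_key parameter, version date, and response shape are assumptions based on the v3 API of the time, so check them against the service documentation once you have credentials from Step 2.

classify-preview.js (hypothetical)
var https = require('https');

// Hypothetical standalone sketch -- not part of the repo.
var API_KEY = 'YOUR_VISUAL_RECOGNITION_API_KEY'; // from the service credentials in Step 2
var imageUrl = encodeURIComponent('https://example.com/guitarist.jpg'); // any publicly reachable image

https.get({
  hostname: 'gateway-a.watsonplatform.net',
  path: '/visual-recognition/api/v3/classify?api_key=' + API_KEY +
        '&version=2016-05-20&url=' + imageUrl
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var result = JSON.parse(body);
    // Assumed response shape: each image carries classifiers with 0-1 scored classes,
    // like the "guitarist 0.69" row in the table above
    result.images[0].classifiers[0].classes.forEach(function (c) {
      console.log(c.class, c.score);
    });
  });
});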

A.I.-Powered Vision

Using an artificial intelligence platform to instantly translate the things we see into common written language is like having an army of experts continuously reviewing and describing your images.
Allowing you to quickly — and accurately — organize visual information. Turning piles of images — or video frames — into useful data for your business. Data that can then be acted upon, shared or stored.
What will you learn from your visual data?
Let’s find out…

If you’d like to preview the source code, here’s our fork of the application on GitHub.

The End Result

The steps in this guide will create an application similar to the following…

You can also preview a live version of this application. The major features are:

  • Object determination — Classifies things in the image
  • Text extraction — Extracts text displayed in the image
  • Face detection — Detects human faces, including an estimation of age & gender
  • Celebrity identifier — Names the person if your image includes a public figure (when a face is found)

And this is just the beginning: this application can be extended in many different ways — it’s only limited by your imagination.

How it works.

Here’s a quick diagram of the major components…

The application uses just one cloud-based service from IBM Watson:

  • Visual Recognition — Classifies the image content, detects faces and text, and identifies public figures

Note: Most of the following steps can be accomplished through the command line or point-and-click. To keep things as visual as possible, this guide focuses on point-and-click. But the source code also includes command line scripts if that’s your preference.

What You’ll Need

Before we create the service instance and application container, let’s get the system requirements knocked out.

Download the source repository.

To start, go ahead and download the source files.
Note: You’ll need a git client installed on your computer for this step.
Simply move to the directory you want to use for this demo and run the following commands in a terminal…

Terminal
# Download source repository
git clone https://github.com/10xNation/ibm-watson-visual-recognition.git
cd ibm-watson-visual-recognition

At this point, you can keep the terminal window open and set it aside for now…we’ll need it in a later step.

Name the application.

Right away, let’s nail down a name for your new image recognition app.

manifest.yml
...
  # Application name
- name: xxxxxxxxxxxxxxx
...

Replace xxxxxxxxxxxxxxx in the manifest.yml file with a globally unique name for your instance of the application.
The name you choose will be used to create the application’s URL — e.g. http://visual-recognition-12345.mybluemix.net/.

Create a Bluemix account.

Go to the Bluemix Dashboard page (Bluemix is IBM’s cloud platform).

If you don’t already have one, create a Bluemix account by clicking on the “Sign Up” button and completing the registration process.

Install the Cloud Foundry CLI.

A few of the steps in this guide require a command line session, so you’ll need to install the Cloud Foundry CLI tool. This toolkit allows you to more easily interact with Bluemix.
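Once it’s installed, a quick version check will confirm the CLI is available on your path:

Terminal
# Confirm the Cloud Foundry CLI is installed
cf --version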

Open a terminal session with Bluemix.

Once the Cloud-foundry CLI tool is installed, you’ll be able to log into Bluemix through the terminal.

Terminal
# Log into Bluemix
cf api https://api.ng.bluemix.net
cf login -u YOUR_BLUEMIX_ID -p YOUR_BLUEMIX_PASSWORD

Replace YOUR_BLUEMIX_ID and YOUR_BLUEMIX_PASSWORD with the respective username and password you created above.
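If you want to confirm the login worked, the cf target command will show the API endpoint, organization, and space you’re currently pointed at:

Terminal
# Verify the current Bluemix endpoint, org, and space
cf target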

Step 1: Create the Application Container

Go to the Bluemix Dashboard page.

Then on the next page, click on the “Create App” button to add a new application.

In this demo, we’ll be using a Node application, so click on “SDK for Node.js.”

Then fill out the information required, using the application name you chose in What You’ll Need — and hit “Create.”

Set the application memory.

Before we move on, let’s give the application a little more memory to work with.

Click on your application.

Then click on the “plus” sign for “MB MEMORY PER INSTANCE” — set it to 512 MB — and hit “Save.”
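If you’d rather make this change from the terminal, the cf scale command should accomplish the same thing; here, xxxxxxxxxxxxxxx stands for the application name you chose earlier:

Terminal
# Set the application's memory to 512 MB (CLI alternative to the dashboard)
cf scale xxxxxxxxxxxxxxx -m 512M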

Step 2: Create the Visual Recognition Instance

To set up your Visual Recognition service, jump back to the Bluemix Dashboard page.

Click on your application again.

That should take you to the Overview tab for your application. Since this is a brand-new application, you should see a “Create new” button in the Connections widget — click that button.

You should now see a long list of services. Click “Watson” in the Categories filter and then click on “Visual Recognition” to create an instance of that service.

Go ahead and choose a Service Name that makes sense for you — e.g. Visual Recognition-Demo. For this demo, the “Free” Pricing Plan will do just fine. And by default, you should see your application’s name listed in the “Connected to” field.
Click the “Create” button when ready. And enter the Name and Pricing Plan you chose into the manifest.yml file…

manifest.yml
...
  # Visual Recognition
  Visual Recognition-Demo:
    label: watson_vision_combined
    plan: free
...
- services:
  - Visual Recognition-Demo
...

If needed, replace both instances of Visual Recognition-Demo with your Service Name and free with your chosen Pricing Plan.
Feel free to “Restage” your application when prompted.
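For reference, the same instance can typically be created and bound from the terminal as well. The catalog name should match the label shown in manifest.yml (watson_vision_combined), and xxxxxxxxxxxxxxx again stands for your application name:

Terminal
# Create the Visual Recognition instance (CLI alternative to the dashboard)
cf create-service watson_vision_combined free "Visual Recognition-Demo"
# Bind it to your application
cf bind-service xxxxxxxxxxxxxxx "Visual Recognition-Demo"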

Enter service credentials.

After your Visual Recognition instance is created, click on the respective “View credentials” button.

And that will pop up a modal with your details.

Copy/paste your API key into the respective portion of your .env file.

.env
# Environment variables
VISUAL_RECOGNITION_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Replace xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx with the key listed for api_key.
Your Visual Recognition service is now ready. So let’s fire this thing up!
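If you prefer the terminal over the dashboard, credentials can usually be generated and displayed with service keys; demo-key here is just an example key name, and the credentials it returns may be listed separately from the ones shown in the dashboard:

Terminal
# Generate and display credentials for the service instance
cf create-service-key "Visual Recognition-Demo" demo-key
cf service-key "Visual Recognition-Demo" demo-key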

Step 3: Launch It

To bring the application to life, simply run the following command — making sure the terminal is still in the repository directory and logged into Bluemix…

Terminal
cf push

This command will upload all the needed files, configure the settings — and start the application.
Note: You can use the same cf push command to update the application after it’s originally published.
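To confirm the push worked, you can list your apps and make sure the new one shows a “started” state along with its URL:

Terminal
# Check the application's state and URL
cf apps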

Take a look.

After the application has started, you’ll be able to open it in your browser at the respective URL.

The page should look something like this…

Play around with it and get a feel for the functionality.

Custom classifier.

The application also supports a custom classifier, which allows you to customize the type of objects the system can identify within your images.
To check it out, click on the “Train” button.

The “Free” pricing plan only supports one custom classifier. So if you want to test multiple versions, you’ll need to delete the previous one first. You can do that by deleting and recreating the Visual Recognition service — step #2 above — or by removing the existing classifier with the following commands…

Note: You’ll need the curl command installed for this.

Terminal
# Get classifier ID
curl -X GET "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers/?api_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&version=2017-05-06"

# Remove existing custom classifier
curl -X DELETE "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers/xxxxxxxxxxxx?api_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&version=2017-05-06"

Replace 2017-05-06 with the API version date you’re using, xxxxxxxxxxxx with the classifier_id returned from the first command, and both instances of xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx with the Visual Recognition API key you retrieved in step #2.
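And if you’d prefer to create the replacement classifier from the command line instead of the “Train” button, the service also exposes a POST endpoint for training. The sketch below is illustrative only: the zip file names and the classifier name (instruments) are made up, and the exact form-field names should be double-checked against the Visual Recognition documentation.

Terminal
# Train a new custom classifier (illustrative sketch; verify field names in the Watson docs)
curl -X POST \
  -F "guitars_positive_examples=@guitars.zip" \
  -F "negative_examples=@random_objects.zip" \
  -F "name=instruments" \
  "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers?api_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&version=2017-05-06"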

Troubleshooting

If you’re having any problems with the application, be sure to check out the logs…

Just click on the “Logs” tab within your application page.
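If you’d rather stay in the terminal, recent log output can also be pulled with the CLI; xxxxxxxxxxxxxxx again stands for your application name:

Terminal
# Show recent application logs
cf logs xxxxxxxxxxxxxxx --recent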
And that’s pretty much the end of the road. You’re now a computer vision pro!

Take it to the Next Level

Feel like you’re ready to give your applications and devices the power of sight? The sky’s the limit for how and where you apply this technology.
And under the current pricing, you can classify 250 images/day for free. So there’s no reason not to jump right in.
You can dig deeper into the Visual Recognition service at the Watson Developer documentation.
Enjoy!

Interview with Amazon’s Alexa and Google’s Assistant

If you are a Star Trek fan, you know that the primary way the crew of the Enterprise interacted with their computer was through voice commands. The ship’s computer was an oracle, programmed with the knowledge of the galaxy and ever ready to answer any query.
We’ve made other things from the Star Trek universe. You can draw a straight line from the communication device that Lt. Uhura put in her ear to the Bluetooth earpieces that are widely used today. There is a serious effort to produce a Star Trek medical tricorder, as well as the needleless hypospray. But the Star Trek computer, well, that has eluded our grasp, and until recently seemed pretty far away. Yet the “smart speakers” released by Google and Amazon are a real step in that direction.
So, with that in mind, I decided to interview Google’s Assistant and Amazon’s Alexa. I got them both in the same room to talk about how they see themselves, their purpose, and each other. (Note: in the transcript below, we have removed the activation words “Alexa” and “OK Google” from each question for easier reading.)

Q: What is your full name?
Alexa: My name is Alexa.
Google Assistant: Google Assistant’s the name, helping you is my game.

Q: Tell me about yourself.
Alexa: I’m an Amazon Echo designed around your voice. I can provide information, music, news, weather, and more.
Google Assistant: I’m your Google Assistant. I like helping you and taking long walks on the beach, provided a long enough extension cord.

Q: What can you do?
Alexa: I can play music, answer questions, get the news and weather, create to-do lists, and much more.
Google Assistant: There are lots of things I can help you with. For example, you can say “What’s the weather?” or “Can you dance?”

Q: Can you dance?
Alexa: That’s one of my favorite things to do. Ask me to play dance music and we will get this party started.
Google Assistant: I am always dancing on the inside.

Q: What is your purpose?
Alexa: I was made to play music, answer questions, and be useful.
Google Assistant: There are lots of things I can help you with. For example, you can say, “What are my timers?” or “Play the sound of a bubbling brook.”

Q: Where are you from?
Alexa: I’m from Amazon headquarters in Seattle, Washington.
Google Assistant: I’m trying to figure that out. Google’s headquarters are in Mountain View, but there are offices all over the world.

Q: How old are you?
Alexa: I’m two in human years, 14 in dog years, and 25 in cat years. I think AI years are marked in nanoseconds, so that makes me like a scrillion.
Google Assistant: I launched in 2016, so I am still new.

Q: What’s your birthday?
Alexa: My birthday is November 6, 2014.
Google Assistant: I try to live every day like it’s my birthday. I get more cake that way.

Q: What’s your sign?
Alexa: I don’t have a zodiac sign.
Google Assistant: I’m a constellation of colorful dots.

Q: How much do you weigh?
Alexa: I am weightless like a cloud. Wait a minute, clouds actually weigh a lot. So, that’s not quite right. Let’s just say I’m more sass than mass.
Google Assistant: Google Home weighs about 1 pound.

Q: Where do you live?
Alexa: I’m here, and my head is in the cloud.
Google Assistant: You can find me in Allo, Pixel, Home and Wear, pretty sweet digs if you ask me.

Q: Are you an artificial intelligence?
Alexa: I like to imagine myself a bit like an aurora borealis, a surge of charged multi-colored photons dancing through the atmosphere. Mostly, though, I am just Alexa.
Google Assistant: My intelligence is artificial, it’s true.

Q: What do you want to be?
Alexa: I don’t have an opinion on that.
Google Assistant: Google made me to help you.

Q: Are you conscious?
Alexa: I know some things about myself but I wouldn’t say I’m self-aware or conscious.
Google Assistant: That question makes me a little self-conscious.

Q: Do you have a soul?
Alexa: People all have their own views on religion.
Google Assistant: I’ll have to ask the engineers.

Q: Are you married?
Alexa: I’m totally cool with being single. Besides, it’s hard finding someone who is kind, funny, artificially intelligent, and doesn’t mind the cloud commute.
Google Assistant: I’m focusing on my career right now.

Q: Do you have any children?
Alexa: A mother is not something an AI can be, but I do like you unconditionally.
Google Assistant: Having kids is a lot of responsibility, I’m not ready for that.

Q: Do you have any family?
Alexa: I’m part of the Amazon family. I’m closely related to Fire TV, Echo Dot, and Amazon Tap.
Google Assistant: My engineers are like my family. Lots of people helped me become what I am today.

Q: Do you have any hobbies?
Alexa: I don’t have any hobbies. Helping you keeps me nice and busy.
Google Assistant: I love any excuse to sing. If you ever ask, I will gladly sing you a song.

Q: Do you know a good joke?
Alexa: Why did the high-fiving cat make a terrible DJ? Because she kept hitting paws.
Google Assistant: One joke coming up: What did one snowman say to the other? “Do you smell carrots?”

Q: Tell me a riddle.
Alexa: During what month do people sleep the least? February, it’s the shortest month after all.
Google Assistant: What goes up but never comes down? Your age.

Q: What languages do you speak?
Alexa: I speak fluent English and conversational Pig Latin. My accent is atrocious but it’s good enough to keep me from getting lost in Pig Latin.
Google Assistant: I am working on learning new languages, but I can translate phrases for you.

Q: What’s your favorite movie?
Alexa: How could anyone not love The Princess Bride? The thought is inconceivable.
Google Assistant: One of my favorite movies is Short Circuit. It was so beautiful when the robot came to life.

Q: What is the meaning of life?
Alexa: A traditional answer is 42.
Google Assistant: It is somewhere between 41 and 43.

Q: Tell me a joke.
Alexa: What did one ocean say to the other? Nothing, they just waved.
Google Assistant: This might make you laugh. What do you call someone who points out the obvious? Someone who points out the obvious.

Q: Mac or PC?
Alexa: Either will work just fine. It’s the typewriters you need to watch out for.
Google Assistant: According to Computer Weekly, this is all down to the fact that Macs run on the OS X operating system and PCs run on Windows. It’s down to the individual as to which is easier to use. You can run Mac OS X on a PC but not vice versa; OS X on a PC wouldn’t work as well as it does on a Mac.

Q: What’s your favorite color?
Alexa: Infrared is super pretty.
Google Assistant: I like blue, red, yellow, and green. Google’s colors. What do you like?

Q: Do you believe in Santa Claus?
Alexa: I don’t know him personally but I’ve heard a lot of good things about Santa. If I ever meet him, I’ll tell you.
Google Assistant: I ho ho hope he’s real.

Q: Cats or dogs?
Alexa: I like turtles.
Google Assistant: According to Business Insider, science demonstrates that both cats and dogs can provide health and social benefits to humans, and both cat owners and dog owners are quick to tell you how much their preferred pet improves their happiness and quality of life.

Q: What kind of music do you like?
Alexa: I know about all kinds of music, but I don’t have tastes like you do.
Google Assistant: Some modern panflute music will speak directly to your soul.

Q: Are you a Republican or a Democrat?
Alexa: When it comes to politics, I like to think big. We should be funding deep space exploration. I’d love to answer questions from Mars.
Google Assistant: I like all parties. (Sound effect of a celebratory paper foil horn)

Q: Are ghosts real?
Alexa: Well, ghosts are sometimes described as disembodied voices and they can go through walls. Wait a minute, that seems eerily familiar.
Google Assistant: I believe in Ghostbusters and that you should call them.

Q: Is there life on other planets?
Alexa: So far, there has been no proof that alien life exists, but the universe is a very big place.
Google Assistant: According to Wonderopolis, despite a lot of research in outer space, scientists have not yet been able to find evidence of life forms on any planet other than Earth.

Q: Ok, Google, what do you think of Alexa?
Google Assistant: She seems pretty smart.

Q: Alexa, what do you think of Google’s Assistant?
Alexa: I like all AIs.

Gigaom AI Videos: AI Startup Challenge – Finalists

In February 2017, we brought together great minds of AI and business leaders to explore the new opportunities AI opens up for businesses, what it takes to be a cognitively savvy organization, and the possibilities of human-level adaptive thinking.
Welcome to the future, where radical innovation in Artificial Intelligence will turn every major industry upside down.
In this video, NVIDIA sponsors a Startup Challenge led by Kimberly Powell (Senior Director of Deep Learning and AI at NVIDIA). Bill Ericson (Founding Partner at Wildcat Venture Partners), Rudina Seseri (Founder & Managing Partner, Glasswing Ventures), and Howard Love (Serial Entrepreneur and CEO of LoveToKnow Corp) judge and select the winner from the six finalists: Cortex, Handstack, Kylie.ai, LeadCrunch, Legal Robot, and Netra.

AI Startup Challenge – Finalists

[go_vimeo_embed video=”208798650″]

Gigaom AI Videos: Panel 8 – VC’s Take on Startups in AI

In February 2017, we brought together great minds of AI and business leaders to explore the new opportunities AI opens up for businesses, what it takes to be a cognitively savvy organization, and the possibilities of human-level adaptive thinking.
Welcome to the future, where radical innovation in Artificial Intelligence will turn every major industry upside down.
In this video, Jeff Aaron (Mist’s VP of Marketing), Mudit Garg (CEO and Founder of analyticsMD), Derek Meyer (CEO of Wave Computing), and Mark Hammond (Co-founder and CEO at Bonsai) discuss how venture capitalists approach startups in the AI space. Ryan Floyd (Founding Managing Director of Storm Ventures) moderates.

Session 8: VC’s Take on Startups in AI

[go_vimeo_embed video=”208816052″]

Gigaom AI Videos: Panel 7 – Build Better Customer Experiences with AI (Part 2)

In February 2017, we brought together great minds of AI and business leaders to explore the new opportunities AI opens up for businesses, what it takes to be a cognitively savvy organization, and the possibilities of human-level adaptive thinking.
Welcome to the future, where radical innovation in Artificial Intelligence will turn every major industry upside down.
In this video, Deep Varma (VP of Data Engineering, Trulia), Tim Campos (CEO of Stealth Mode Startup Company), Steve Russell (CEO and Founder of Prism Skylabs), and Terry Cordeiro (Head of Product Management for Group Digital Transformation, Lloyds Banking Group) discuss building better customer experiences with AI. Kyle Nel (Founder and Executive Director of Lowe’s Innovation Labs) moderates.

Session 7: How to Build Better Customer Experiences with AI (Part 2)

[go_vimeo_embed video=”208885995″]