IFTTT launches 3 new apps to create an easy button for the web

For the last few years, If This Then That (IFTTT) has been the place to go if you wanted to customize your web experience. It was a simple way to make complex interactions between web services a reality. But now the five-year-old company has launched three new apps and renamed its original app to reflect its evolving goal: making those same online interactions even simpler. In the process, it thinks it may even have found a piece of its monetization strategy.

The Do family of apps

On Thursday IFTTT is launching three new apps for iOS and Android that let people push a button on their phones to make something happen. There’s the Do Button, which lets you press a button to trigger an action that can be as simple as turning off a light or as complicated as posting your current location to your Twitter bio. My current Do Button recipes let me turn off the Hue bulb in my bedside lamp, tell my team on Slack that I’m going to be away from the keyboard for a while and track how many glasses of water I’ve drunk using Numerous.

In addition to the Do Button are two other Do apps: Do Camera and Do Note. Do Camera links your phone’s camera to other web services, so any picture you take with a given recipe gets uploaded to that service. I have one that sends pictures to Dropbox, one for Twitter and one for my email. Do Note lets you create calendar entries or send tweets, emails and other text-based notes to services. I have one for tracking gift ideas, one for logging food to an activity tracker and one that’s empty. In each app you only get three recipes. (You can search through a list of established recipes or build your own.)

In playing with the Do Button, I’ve found about a three-to-four-second delay between hitting the button and the physical action happening. That may be fine for turning my thermostat down or posting to Slack, but it’s annoying for my lights. Setting up new channels is easy, and even if you’ve never used the original IFTTT service, you’d probably find value in it. Even my husband thought it was pretty neat.

The power of simplicity

Like Twitter, the Do family of apps is deceptively easy to use, yet incredibly versatile. I’ve long loved IFTTT because it makes the kinds of links I’ve longed for on the web easier to implement (remember the pain of trying to use Yahoo Pipes?), but for many mainstream consumers it’s still too complicated. With Do, linking two services becomes almost as easy as clicking two buttons. In an interview, IFTTT CEO Linden Tibbets said he could envision a Do button being built into web platforms, much like we have Twitter and Facebook buttons today.

Do Camera recipes


I can see it, because it’s a powerful thing to be able to easily automate actions between two services you use all the time. A command as simple as “every time I take a photo upload it to Dropbox” can take several clicks to set up today on your mobile phone, and you may not actually want all your photos to automatically upload. With Do Camera you can just take a picture from the app using the Dropbox recipe and it’s automatically saved there.

When IFTTT becomes If

As part of the transition from being a one-app company to now having four, IFTTT is changing the name of its original app to If. The app itself stays the same, but after talking with Tibbets my expectation is that we’ll see a lot more work focused around the Do family. In part that’s because that’s where the mainstream users are expected to be, but it’s also where IFTTT hopes to make its money.


Mike Harris — CEO, Zonoff
Linden Tibbets — CEO, IFTTT

Speaking last year at our Structure Connect conference, Tibbets explained that he planned to monetize the service by charging consumers. He didn’t offer specifics during our chat about Do, but he did offer hints. For example, the Do apps are limited to three recipes right now, but users might pay for more, Tibbets suggested.

Meanwhile, the If app becomes the back end of the Do family and has become a lot more powerful as a platform. After raising $30 million, IFTTT has managed to open up the If app so other companies can build their own If channels. Previously, IFTTT’s own developers had to build each channel themselves, making it tough to scale the company and the number of channels it supported very quickly. Now it has over 120 channels, according to Tibbets, and some of the early channels that IFTTT engineers built are being taken over and supported by the companies themselves.

With the launch of these apps, IFTTT has seen that as we spend more of our lives online we don’t have a lot of great user interfaces to help us bridge different apps or offline and online services. The app model requires too many swipes, taps and touches to let us do what we want, and the physical interaction of pressing a button is still too dumb for software. Do combines the two and eliminates as many clicks as possible. Will consumers find that compelling enough to pay for it?

When it comes to smart home security, cameras are the worst

Don’t freak out, but the products inside your smart home have some serious security flaws, according to a new report out from enterprise security research firm Synack. The company tested 16 popular devices over the holidays and determined that connected cameras were the least secure. Products ranging from the SmartThings hub to the Nest and Lyric thermostats also had some problems.

Colby Moore, a security research analyst who compiled the report, said it took him about 20 minutes to break into each of the assorted devices, and he found only one — the Kidde smoke detector — without any significant flaws. But the Kidde isn’t actually connected. Beyond each device’s individual problems, the macro picture from the report is that there are no real standards in the connected-home security space, and perhaps we should come up with some.

“Right now the internet of things is like computer security was in the nineties, when everything was new and no one had any security standards or any way to monitor their devices for security,” said Moore.

The Withings Home camera


In general Moore suggests the following as basic best practices, even though he concedes that some users won’t like them:

  • Hardwire as many devices as possible. And when devices are wireless, make sure they have push notifications to the user when they are kicked offline.
  • Firmware updates should happen automatically, especially those dealing with security flaws and vulnerabilities. Don’t wait for the user to push them through.
  • Require strong passwords. Make sure they have combinations of numbers, special characters and letters and are more than 12 characters.
  • Send all the data to the cloud using a secured connection. Don’t store it on the device, which can be hacked.
  • If you are going to use SSL, check certificates at both ends. Apparently, some devices do not.
  • Use SSL pinning so your device is authenticated, as opposed to the network the device is on.
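The last two recommendations — checking certificates at both ends and pinning — can be sketched in a few lines. This is an illustrative example only, not code from any of the tested products; the pinned fingerprint value and function names are hypothetical.

```python
# Sketch of certificate pinning as the report recommends: instead of trusting
# any certificate a CA happened to sign, the client compares the server
# certificate's SHA-256 fingerprint against a value shipped with the device.
import hashlib
import socket
import ssl

# Hypothetical fingerprint the manufacturer bakes into the device firmware.
PINNED_SHA256 = "d4c9d9027326271a89ce51fcaf328ed673f17be33469ff979e8ab8dd501e664f"

def fingerprint_matches(der_cert: bytes, pinned_hex: str) -> bool:
    """Compare the SHA-256 digest of a DER-encoded certificate to the pin."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_hex.lower()

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection and refuse it unless the server cert matches the pin."""
    context = ssl.create_default_context()  # normal CA-chain verification still runs
    sock = context.wrap_socket(
        socket.create_connection((host, port)), server_hostname=host
    )
    der_cert = sock.getpeercert(binary_form=True)
    if not fingerprint_matches(der_cert, PINNED_SHA256):
        sock.close()
        raise ssl.SSLError("certificate fingerprint does not match the pinned value")
    return sock
```

A connection then succeeds only if both the CA chain validates and the fingerprint matches, so a rogue certificate from a compromised CA is rejected.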

Some of these may be controversial. For example, stronger passwords can be a pain to enter on devices with tiny screens and no keyboards. Another issue is hardwiring everything: wireless devices are simply more convenient, and wireless connectivity is often the reason people buy one product over another. Finally, storing all of your data in the cloud might be more secure, but it’s only as secure as your cloud vendor. If the vendor gets hacked, there go your data and your camera images.

Moore concedes these points, but says that even understanding these tradeoffs would help. I agree. It’s one thing to trust my camera data to Nest or Amazon, but another to trust it to a startup that just launched three months ago (although it’s highly likely that its cloud back-end is Amazon Web Services). So what about the specific devices?

Synack looked at four classes: cameras, thermostats, smart hubs and smoke detectors. It found the most flaws in the camera class, with Dropcam being the most secure.

In thermostats, Nest once again was the most secure, but most were dinged for their password policies. This is understandable, because most thermostats don’t have keyboards, making it tough to enter a password on the device itself.


When it comes to smoke detectors, the Kidde was the only device to get a perfect score from a security perspective, in part because it isn’t connected. Why it’s on this report, I don’t know. There’s also the first mention of a supply chain–based attack, which is worth noting because it means someone would have to intercept the device and change a component. That isn’t specific to smoke detectors; it applies to any connected product. I thought this was tenuous, but Moore pointed out that we could see more of it in the future and that it really just takes a bit more long-range planning. It could also show up more in returned or secondhand devices.


Finally, we see the results from testing home automation hubs. While the Revolv isn’t sold anymore, because Nest purchased the company for its engineers, the others are on the market.


While this report covers the devices themselves, I’d like more insight into how we secure the future, when we start linking these devices together. I tie many services together via Works with Nest, If This Then That and many other services, and suspect others will soon do the same. And while individual devices may get more secure, once they start sharing data between clouds, that introduces new weaknesses that this report doesn’t even get into. When asked about security in the smart home today, Moore said, “Security is abysmal.”

So, let’s work on that, but let’s think about how we’re planning for tomorrow, too.

Updated: This story was updated at 3:06pm PT to clarify that the Kidde smoke detector isn’t connected.

How NASA launched its web infrastructure into the cloud

Among U.S. government agencies, the adoption of cloud computing hasn’t been moving full steam ahead, to say the least. Even though the Obama administration unveiled a cloud-first initiative in 2011 that called for government agencies to move their old legacy IT systems to the cloud, few agencies have made great strides in modernizing their infrastructure.

In fact, a September 2014 U.S. Government Accountability Office report on federal agencies and cloud computing found that while several agencies had boosted the share of their IT budgets spent on cloud services since 2012 (the GAO studied seven agencies in 2012 and followed up in 2014), “the overall increase was just 1 percent.” The report attributed the small increase, relative to overall budgets, to the agencies’ “legacy investments in operations and maintenance,” which they were not going to move to the cloud unless those systems were slated to be either replaced or upgraded.

But there are at least a few diamonds in the rough. The CIA recently found a home for its cloud on Amazon Web Services. And in 2012, NASA contracted with cloud service broker InfoZen on a five-year, $40 million project to migrate NASA’s web infrastructure — including NASA.gov — to the Amazon cloud and maintain it there.

This particular initiative, known as the NASA Web Enterprise Services Technology (WestPrime) contract, was singled out in July 2013 as a successful cloud-migration project in an otherwise scathing NASA Office of Inspector General audit report on NASA’s progress in moving to cloud technology.

Moving to the cloud

In August, InfoZen detailed the specifics of its project and claimed it took 22 weeks to migrate 110 NASA websites and applications to the cloud. As a result of the project’s success, the Office of Inspector General recommended that NASA departments use the WestPrime contract or a similar contract in order to meet policy requirements and move to the cloud.

The WestPrime contract primarily deals with NASA’s web applications and doesn’t take into account high-performance computing endeavors like rocket-ship launches, explained Julie Davila, the InfoZen cloud architect and DevOps lead who helped with the migration. However, don’t let that lead you to believe that migrating NASA’s web services was a simple endeavor.

Just moving NASA’s “flagship portal,” nasa.gov, which contains roughly 150 applications and around 200,000 pages of content, took about 13 weeks, said Roopangi Kadakia, a web services executive at NASA. And NASA.gov and its related applications didn’t just have to be moved; they also had to be upgraded from old technology.

NASA was previously using an out-of-support proprietary content management system and used InfoZen to help move that over to a “cloudy Drupal open-source system,” she said, which helped modernize the website so it could withstand periods of heavy traffic.

“NASA.gov has been one of the top visited places in the world from a visitor perspective,” said Kadakia. When a big event like the landing of the Mars rover occurs, NASA can experience traffic that “would match or go above CNN or other large, highly trafficked sites,” she said.

NASA's Rover Curiosity lands on Mars


NASA has three cable channels that the agency runs continually on its site, so it wasn’t just looking for a cloud infrastructure that’s tailored to handle only worst-case scenarios; it needed something that can keep up with the media-rich content NASA consistently streams, she said.

The space agency uses Amazon Web Services to provide the backbone for its new Drupal content management system, and it has worked out an interesting way to pay for the cloud, explained Kadakia. NASA uses a contract vehicle called Solutions for Enterprise-Wide Procurement (SEWP) that functions like a drawdown account between NASA and Amazon.

The contract vehicle takes into account that the cost of cloud services can fluctuate based on needs and performance (a site might get a spike in traffic one day and see it drop the next). Kadakia estimates that NASA could end up spending around $700,000 to $1 million on AWS for the year; the agency can put $1.5 million into the account to cover any unforeseen costs, and any money not spent is saved.

“I think of it like my service card,” she said. “I can put 50 bucks in it. I may not use it all and I won’t lose that money.”

Updating the old

NASA also had to sift through old applications on its system that were “probably not updated from a tech perspective for seven-to-ten years,” said Kadakia. Some of the older applications’ underlying architecture and security risks weren’t properly documented, so NASA had to do an audit of these applications to “mitigate all critical vulnerabilities,” some of which its users didn’t even know about.

“They didn’t know all of the functionalities of the app,” said Kadakia. “Do we assume it works [well]? That the algorithms are working well? That was a costly part of the migration.”

After moving those apps, NASA had to define a change-management process for its applications so that each time something got altered or updated, there was documentation to help keep track of the changes.

To help with the nitty-gritty details of transferring those applications to AWS and setting up new servers, NASA used the Ansible configuration-management tool, said Davila. When InfoZen came on, the apps were stored in a colocated data center where they weren’t being managed well, he explained, and many server operating systems weren’t being updated, leaving them vulnerable to security threats.

Without the configuration-management tool, Davila said, it would “probably take us a few days to patch every server in the environment” using shell scripts. Now the team can “patch all Linux servers in, like, 15 minutes.”
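The kind of fleet-wide patching Davila describes maps onto a short Ansible playbook. The sketch below is illustrative only — the inventory group name and structure are assumptions, not InfoZen’s actual configuration — but it shows how one file can push pending package updates to every Linux host at once.

```yaml
# Illustrative playbook (hypothetical inventory group "linux_servers"):
# apply all pending package updates across the fleet in one run.
- hosts: linux_servers
  become: true
  tasks:
    - name: Apply pending updates on RHEL/CentOS-family hosts
      yum:
        name: '*'
        state: latest
      when: ansible_os_family == "RedHat"

    - name: Apply pending updates on Debian/Ubuntu-family hosts
      apt:
        upgrade: dist
        update_cache: true
      when: ansible_os_family == "Debian"
```

Because Ansible runs the tasks over SSH against every host in the group in parallel, the wall-clock time is closer to the slowest single host than to the sum of all of them — which is how days of shell scripting collapse into minutes.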

NASA currently has a streamlined DevOps environment in which spinning up new servers is faster than before, he explained. Whereas it used to take NASA roughly one to two hours to load up an application stack, it now takes around ten minutes.

What about the rest of the government?

Kadakia claimed that moving to the cloud has saved NASA money, especially as the agency cleaned out its system and took a hard look at how old applications were originally set up.

The agency is also looking at optimizing its applications to fit in with the more modern approach of coupled-together application development, she explained. This could include updating or developing applications that share the same data sets, which would have previously been a burden, if not impossible, to do.

A historical photo of the quad, showing Hangar One in the back before its shell was removed. Photo courtesy of NASA.


Larry Sweet, NASA’s CIO, has taken notice of the cloud-migration project’s success and sent a memo to the entire NASA organization urging other NASA properties to consider the WestPrime contract first if they want to move to the cloud, Kadakia said.

While it’s clear that NASA’s web services have benefited from being upgraded and moved to the cloud, it remains hazy how other government agencies will follow suit.

David Linthicum, a senior vice president at Cloud Technology Partners and a Gigaom analyst, said he believes there isn’t a sense of urgency for these agencies to convert to cloud infrastructure.

“The problem is that there has to be a political will,” said Linthicum. “I just don’t think it exists.”

Much like President Obama appointed an Ebola czar during the Ebola outbreak this fall, there should be a cloud czar who is responsible for overseeing the rejiggering of agency IT systems, he said.

“A lot of [government] IT leaders don’t really like the cloud right now,” said Linthicum. “They don’t believe it will move them in the right direction.”

Part of the problem stems from the contractors the government is used to working with. Organizations like Lockheed Martin and Northrop Grumman “don’t have cloud talent,” he said, and are not particularly suited to guiding agencies looking to move to the cloud.

Still, as NASA’s web services and big sites are now part of the cloud, perhaps other agencies will begin taking notice.

Images courtesy of NASA