Google pays $7B to Android devs, brings search ads to Google Play

It’s a good time to be a mobile app developer: Google says it paid out $7 billion in 2014 to those who make Android apps. The future may be brighter for some, though, because the company plans to improve app discovery in the Google Play Store through sponsored search results.

A pilot program will roll out in the coming weeks with a few advertisers who are already running Google search ads for their apps:

With more than 100 billion searches every month on Google.com, we’ve seen how search ads shown next to organic search results on Google.com can significantly improve content discovery for users and advertisers, both large and small. Search ads on Google Play will enable developers to drive more awareness of their apps and provide consumers new ways to discover apps that they otherwise might have missed.

Running ads for apps is surely going to cost developers, but the idea is that they’ll recoup the costs through app sales. Google, of course, gets a cut of the app sales, as well as the app search ads, so this is another way for the search giant to boost revenues, particularly on mobile.

End users will see a small notation in app search results indicating a sponsored app; look for the little yellow “Ad” symbol under an app’s name:

I’d expect this approach to appeal most to developers who sell paid apps, though those with free apps might still splurge on ads for their software and count on in-app purchases for revenue. A small development shop on a limited budget, however, could be at a real disadvantage under sponsored app search results: it can be outspent by larger development companies, whose apps would then rank higher in Google Play Store searches.

Are there any app developers out there with thoughts on the new sponsored search results? Share them in the comments, because you’re the group most affected here.

Microsoft just made it easier to use OneDrive in mobile apps

Not to be outdone by competing services such as iCloud, Google Drive, Dropbox and the like, Microsoft is pushing OneDrive for mobile app developers. The company on Tuesday introduced a new OneDrive API that supports software on Android, iOS, Windows and the web.

This means developers have a simple method to add OneDrive into their apps, allowing for application data to be stored to or synchronized with Microsoft’s OneDrive cloud service: “It only takes a few lines of code to set up a basic integration with our Android Picker/Saver SDK and our iOS DocumentPicker contract support,” the company said in its blog post. Users of such apps, then, will have another cloud storage option available to them in-app.
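
For a rough sense of what a basic integration might look like, here is a sketch that stores a bit of app data through the OneDrive REST API from Python. The endpoint shape and token handling are illustrative assumptions rather than code lifted from Microsoft’s SDKs, and the file path is made up:

```python
import requests

# Rough sketch: store a small file in the signed-in user's OneDrive via the
# REST API. Assumes an OAuth access token has already been obtained; the
# endpoint shape follows the API as documented at launch and may differ.
ACCESS_TOKEN = "..."  # hypothetical token from the OAuth flow
API_ROOT = "https://api.onedrive.com/v1.0"

def save_app_data(filename: str, payload: bytes) -> dict:
    """Upload app data as a file under the OneDrive root and return item metadata."""
    resp = requests.put(
        f"{API_ROOT}/drive/root:/{filename}:/content",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data=payload,
    )
    resp.raise_for_status()
    return resp.json()  # item metadata: id, name, size, etc.

if __name__ == "__main__":
    item = save_app_data("notes/app-state.json", b'{"lastSync": "2015-02-24"}')
    print(item["id"], item["size"])
```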

Microsoft said it is using the OneDrive API in some of its own mobile apps, and developers will have access to the same code.

As new features are added to the API, Microsoft will share them so that everyone using the API is building on the same platform. Among the first highlighted features for app makers: efficient tracking of changes to files and folders on OneDrive from within an app, resumable uploads for files up to 10GB in size, and custom file thumbnail images.
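
The resumable-upload feature in particular maps to a simple pattern: create an upload session, then send the file in fragments that can be retried. Below is a hedged sketch of that flow against the REST API; the session endpoint and response fields reflect my reading of the documented large-file upload, so treat it as illustrative rather than canonical:

```python
import os
import requests

API_ROOT = "https://api.onedrive.com/v1.0"
CHUNK = 10 * 1024 * 1024  # send the file in 10MB fragments

def upload_large_file(token: str, local_path: str, remote_path: str) -> dict:
    """Sketch of a resumable upload: open a session, then PUT the file in fragments."""
    headers = {"Authorization": f"Bearer {token}"}
    session = requests.post(
        f"{API_ROOT}/drive/root:/{remote_path}:/upload.createSession",
        headers=headers,
    )
    session.raise_for_status()
    upload_url = session.json()["uploadUrl"]

    size = os.path.getsize(local_path)
    with open(local_path, "rb") as f:
        offset = 0
        while offset < size:
            chunk = f.read(CHUNK)
            resp = requests.put(
                upload_url,
                headers={
                    "Content-Length": str(len(chunk)),
                    "Content-Range": f"bytes {offset}-{offset + len(chunk) - 1}/{size}",
                },
                data=chunk,
            )
            resp.raise_for_status()  # a failed fragment can be retried from the last offset
            offset += len(chunk)
    return resp.json()  # the final fragment's response carries the item metadata
```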

Microsoft’s Rooms doesn’t make the cross-platform cut

As Microsoft readies a more universal Windows 10 for computers, tablets and phones, some pruning seems to be in order. One of the unique Windows Phone features getting dropped, according to All About Windows Phone, is Rooms, the app that gave families and friends a private place for chats, photo sharing and calendar coordination.

Microsoft’s support page for Rooms explains that the door will be shut once you upgrade your handset to Windows 10 and that official support for Rooms is ending March 15.

I can understand paring back on functions that probably didn’t have a high return; after all, Rooms doesn’t provide much value if one person in the family uses a Windows Phone and everyone else uses an Apple iPhone or an Android handset. Given Microsoft’s big push to get its apps and services on competing mobile platforms, though, I wish it had given Rooms a chance on iOS and Android.

Sure, there are plenty of other sharing and collaboration apps that replicate the core functions of Rooms. You could use SMS or any number of messaging apps, for example, for group texts and such. Shared calendars work well for coordinating schedules; everyone in my family has their own Google calendar, but we share access to keep track of who is going where, and when. The same goes for photos: Facebook, Instagram and other services handle those just fine.

The thing is, all of these options are disparate and filled with information from so many other people. Rooms cut through that extraneous noise: when you wanted to focus solely on what a few very important people were doing, you simply invited them to a room.

Again, I see why Microsoft is letting Rooms go; I just wonder how much effort it would have taken to bring the unique, useful app to other platforms. Clearly, it was too much in Microsoft’s eyes.

PhotoTime is a deep learning application for the rest of us

A Sunnyvale, California, startup called Orbeus has developed what could be the best application yet for letting everyday consumers benefit from advances in deep learning. It’s called PhotoTime and, yes, it’s yet another photo-tagging app. But it looks really promising and, more importantly, it isn’t focused on business uses like so many other recent deep-learning-based services, nor has it been acquired and dissolved into Dropbox or Twitter or Pinterest or Yahoo.

Deep learning, for anyone unfamiliar with the term, describes a class of artificial intelligence algorithms that excel at learning the latent features of the data they analyze. The more data deep learning systems have to train on, the better they perform. The field has made big strides in recent years, largely on machine-perception workloads such as computer vision, speech recognition and language understanding.
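
To make that concrete, here is a minimal, purely illustrative sketch of the "image in, ranked labels out" idea, using an off-the-shelf pretrained network in Python. It has nothing to do with Orbeus’s own models, which are proprietary:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Illustrative only: tag an image with a generic pretrained classifier.
# This just shows the basic idea of feeding a photo to a deep model and
# getting back ranked labels; it is not Orbeus's pipeline.
model = models.resnet18(pretrained=True)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def tag_image(path: str, top_k: int = 5):
    """Return the model's top-k class scores and indices for one photo."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)[0]
    return torch.topk(probs, top_k)  # indices map to labels like "seashore" or "daisy"

print(tag_image("vacation.jpg"))  # hypothetical local photo
```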

(If you want a crash course in what deep learning is and why web companies are investing billions of dollars in it, come to Structure Data in March and watch my interview with Rob Fergus of Facebook Artificial Intelligence Research, as well as several other sessions.)

The Orbeus team. L to R: Yuxin Wu, Yi Li, Wei Xia and Meng Wang.

I am admittedly late to the game in writing about PhotoTime (it was released in November) because, well, I don’t often write about mobile apps. The people who follow this space for a living, though, seemed impressed when they reviewed it back then. Orbeus, the company behind PhotoTime, launched in 2012, and its first product is a computer vision API called ReKognition. According to CEO Yi Li, the company has already raised nearly $5 million in venture capital.

But I ran into the Orbeus team at a recent deep learning conference and was impressed with what they were demonstrating. As an app for tagging and searching photos, it appears very rich. It tags smartphone photos using dozens of different categories, including place, date, object and scene. It also recognizes faces — either by connecting to your social networks and matching contacts with people in the photos, or by building collections of photos including the same face and letting users label them manually.

You might search your smartphone, for example, for pictures of flowers you snapped in San Diego, or for pictures of John Smith at a wedding in Las Vegas in October 2013. I can’t vouch for its accuracy personally because the PhotoTime app for Android isn’t yet available, but I’ll give it the benefit of the doubt.

More impressive than the tagging features, though — and the thing that could really set it apart from other deep-learning-powered photo-tagging applications, including well-heeled ones such as Google+, Facebook and Flickr — is that PhotoTime actually indexes the album locally on users’ phones. Images are sent to the cloud, run through Orbeus’s deep learning models, and the metadata is then sent back to your phone so you can search existing photos even without a network connection.
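
One way to picture that split between cloud inference and on-device search is a tiny local index: the cloud returns tags for each photo, the phone stores only that metadata, and queries run locally with no network at all. This is my guess at the general pattern, not Orbeus’s implementation, and the table layout and photo IDs are made up:

```python
import sqlite3

# Hypothetical on-device index: the cloud model returns tags per photo,
# the phone stores just that metadata, and searches run offline.
db = sqlite3.connect("photo_index.db")
db.execute("""CREATE TABLE IF NOT EXISTS tags (
                photo_id TEXT, tag TEXT, PRIMARY KEY (photo_id, tag))""")

def store_metadata(photo_id: str, tags_from_cloud: list) -> None:
    """Save the tags returned by the cloud model for one photo."""
    db.executemany("INSERT OR IGNORE INTO tags VALUES (?, ?)",
                   [(photo_id, t.lower()) for t in tags_from_cloud])
    db.commit()

def search(query: str) -> list:
    """Offline search: every photo whose tags include the query term."""
    rows = db.execute("SELECT DISTINCT photo_id FROM tags WHERE tag = ?",
                      (query.lower(),))
    return [r[0] for r in rows]

store_metadata("IMG_0042", ["flower", "san diego", "outdoor"])
print(search("flower"))  # -> ['IMG_0042'], even with the radio off
```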

The company does have a fair amount of experience in the deep learning field; several of its members, including research scientist Wei Xia, won a couple of categories at last year’s ImageNet object-recognition competition as part of a team from the National University of Singapore. Xia told me that while PhotoTime’s application servers run largely on Amazon Web Services, the company’s deep learning system resides on a homemade, liquid-cooled GPU cluster at the company’s headquarters.

Here’s what that looks like.

The Orbeus GPU cluster.

As I’ve written before, though, tagging photos is only part of the ideal photo-app experience, and there’s still work to do there no matter how well the product functions. I’m still waiting for some photo application to perfect the curated photo album, something Disney Research is working on using another machine learning approach.

And while accuracy continues to improve for recognizing objects and faces, researchers are already hard at work applying deep learning to everything from recognizing the positions of our bodies to the sentiment implied by our photos.

Braintree’s bitcoin API is now available to merchants in beta

PayPal’s developer arm Braintree has finished its initial integration work with Coinbase and is opening up a beta program to its online merchants and mobile developers who want to start accepting bitcoin payments. There’s no word yet on which companies have signed up for the beta, but Braintree has an impressive customer base of e-commerce and mobile app startups, including Airbnb and Uber. PayPal has made bitcoin an option for certain transactions by its various sellers, but, as my colleague Biz Carson points out, the payments giant is moving cautiously when it comes to cryptocurrency.
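
For merchants already on Braintree, the appeal is that accepting bitcoin should look much like accepting any other payment method: the client SDK hands the server a payment-method nonce and the server creates a transaction. The sketch below uses Braintree’s Python server SDK with placeholder sandbox credentials; how exactly the Coinbase-backed nonce arrives in the beta is an assumption on my part, not a documented detail:

```python
import braintree

# Sketch of the server side of a Braintree charge. Whether the nonce came
# from a card or (under the beta) a Coinbase-linked bitcoin account is
# abstracted away by the client SDK; the bitcoin specifics are assumed here.
braintree.Configuration.configure(
    braintree.Environment.Sandbox,      # placeholder sandbox credentials
    merchant_id="your_merchant_id",
    public_key="your_public_key",
    private_key="your_private_key",
)

def charge(nonce_from_client: str, amount: str) -> str:
    """Create and settle a transaction for the nonce the client SDK returned."""
    result = braintree.Transaction.sale({
        "amount": amount,
        "payment_method_nonce": nonce_from_client,
        "options": {"submit_for_settlement": True},
    })
    if result.is_success:
        return result.transaction.id
    raise RuntimeError(result.message)
```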

NYC considers Bitcoin, Apple Pay as options to pay parking fines

You can buy an increasing number of goods and services with the cryptocurrency Bitcoin and Apple’s new mobile payments platform, Apple Pay, but soon their reach could expand to fines as well – specifically parking tickets. New York City is contemplating a mobile service that would let parking scofflaws pay their tickets right where they are issued, using a number of different mobile payment methods, according to MarketWatch.

In a city where it’s notoriously difficult to park legally, NYC issues as many as 10 million parking tickets each year, many of which rack up further fines after they go unpaid for 30 days. The city is hoping that making it easier for busy New Yorkers to pay their fines will lead to more timely collection of that ticket revenue, MarketWatch reported.

The city is weighing creating a mobile app that would let you snap a picture of your ticket with your phone or scan a barcode, then bring up a payment interface that could include PayPal as well as Apple Pay and Bitcoin. While there are no set plans or timeline for introducing such an app, MarketWatch said, the city has issued a request for information (RFI) to gather more data on how an instant payment system like this would work.