Session Name: Fail Whale To Investing.
Speakers: S1 Announcer, S2 Om Malik, S3 Michael Abbott
All right, coming up next we’re going to have Fail Whale to Investing: Emerging Application Patterns, moderated by my boss, Om Malik, and he’s going to be talking with Michael Abbott, a partner with Kleiner Perkins Caufield & Byers. Please welcome Om and Michael to the stage.
OM MALIK 00:41
Welcome, Mike. Thank you for joining us. Last time I interviewed you, you were handling the infrastructure for Twitter and making it all work. Now you’re at Kleiner Perkins doing the investing. So, tell us how those two things are different.
MICHAEL A 01:01
Wow. Very, very different. I think the first thing that comes to mind is that when you’re in an operating role, regardless of whether you’re starting a company, leading a team, or shipping a product, you get feedback really quickly on whether something is working or not. One of the challenges on the investment side is that the feedback cycles are much longer. I’ve been in venture capital now for a year. I’ve had the opportunity to invest in a couple of great early-stage companies, but those are very early companies and it will take some time to see how they actually play out. So I think that feedback cycle is very different for me, having both started companies and led and scaled large teams.
OM MALIK 01:44
So, as a real-time data guy, which is what Twitter was, how do you deal with the fact that we live in a world where half of the world is synchronous and half of it isn’t, especially the venture half of it?
MICHAEL A 02:00
Yeah, well, I think when you take a service like Twitter, it’s remarkable on many levels. You have very high write volumes and high read volumes, and as you point out, it feels very synchronous, but behind the scenes there is actually a lot of asynchronous queuing going on. That’s really important for dealing with different events, if all of a sudden Bin Laden is killed. How do you actually handle that type of load from an infrastructure standpoint? If everything were indeed synchronous, that would be very, very difficult to handle. So, part of this is around architecting systems that can degrade gracefully, or scale out well, as you get those flash crowds, if you will.
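The decoupling Abbott describes, a fast synchronous write path backed by asynchronous fan-out queues, can be sketched roughly as follows. The class, queue shape, and data are illustrative inventions, not Twitter’s actual architecture:

```python
from collections import deque

class AsyncFanout:
    """Toy model: a write is acknowledged immediately and queued;
    delivery to follower timelines happens later, off the request path."""

    def __init__(self):
        self.queue = deque()   # pending fan-out work
        self.timelines = {}    # follower -> list of tweets

    def post(self, tweet, followers):
        # The write path only enqueues work, so it stays fast
        # even during a flash crowd.
        self.queue.append((tweet, followers))
        return "accepted"      # acknowledged before fan-out completes

    def drain(self, max_items=None):
        # Workers consume the queue at their own pace; under load,
        # delivery lags (graceful degradation) instead of failing.
        done = 0
        while self.queue and (max_items is None or done < max_items):
            tweet, followers = self.queue.popleft()
            for f in followers:
                self.timelines.setdefault(f, []).append(tweet)
            done += 1
        return done

svc = AsyncFanout()
svc.post("big news!", ["alice", "bob"])
svc.post("follow-up", ["alice"])
backlog = len(svc.queue)   # writes acknowledged, not yet delivered
svc.drain()
```

The user-facing half (`post`) never blocks on the expensive half (`drain`), which is what lets the experience still feel real time under a spike.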
OM MALIK 02:46
So, there is one thing from the past: we would see this thing called the Slashdot effect, and companies would try to prepare for that kind of scaling. In the data world, there is no such thing as a Slashdot. In fact, as you said, Bin Laden is killed and you don’t know how many people are going to tweet and how many times things are going to get retweeted; or the Oscars. What kind of capacity planning, or what kinds of things, do you need to do today versus what you used to do back in the day, from your perspective? And how does the emergence of massive data streams impact the whole idea of capacity and infrastructure?
MICHAEL A 03:33
It’s certainly a very, very difficult problem. As you point out, 10 years ago you would sit down with an Excel spreadsheet, build some models, and go to your CFO or VP of Finance to explain why you needed to invest X number of dollars in some single large-scale piece of hardware. In today’s world, many of these services obviously require a high degree of elasticity: how do you deal with the high write volumes? How do you deal with the high read volumes? Again, different services have different patterns there, so it’s difficult to generalize what the single answer actually is. But from a capacity planning perspective, it was interesting, because I recruited a number of folks from Google to Twitter to focus on just that problem. There was a lot of interest, because they were saying, “This is a new environment; how do I actually do capacity planning in a world with these non-predictable events that cause huge demand?” A lot of the plumbing underneath, though, is around, as I pointed out before, how you degrade slightly so that users don’t feel the degradation: you’re fanning out to larger numbers of queues, but from a user experience perspective, you don’t feel like you’re losing anything. So I think part of what’s going on here is that, from a user perspective, you want it to feel real time, while behind the scenes you decompose your infrastructure into as many asynchronous queues as you can, so that you can handle these loads and you don’t lose tweets or data.
OM MALIK 05:19
So, what is the idea of real time? What seems like real time to the human eye or the human brain?
MICHAEL A 05:32
OM MALIK 06:58
Are there any key lessons for app developers from Twitter? Especially people who are building what we like to call the big data apps, whether it be the enterprise, or even for the consumer level products?
MICHAEL A 07:14
I think it really relates to big data. I was thinking about this before, when we were talking about this. Certainly when you have services like Twitter, there’s a lot of data being collected on many different dimensions, whether it be system data or user data, and one of the things we invested in pretty heavily is an experimentation framework: how can we test out a set of models that we built on the machine learning side to, frankly, target promoted tweets? The reason why this was really important for us to go do was that, as we rolled out promoted tweets into people’s timelines, we didn’t want that experience to feel jarring. So the goal was, could we actually target promoted tweets so they felt like a natural part of the content you were consuming via Twitter. In order to get there, and obviously this is still continuing today, we invested really heavily in how we could do as many small experiments as possible with that big data, and in this case the application was inserting advertising content within a native stream.
OM MALIK 08:22
How did the Twitter team implement all that? With very limited data at first, since very few people used it, how did you guys use data to create that experience?
MICHAEL A 08:48
Well, from a user experience standpoint, again, we have to decouple the ‘I am reading Twitter and I happen to see a promoted tweet that is targeted to me, and it happens to be about swimming because I like to swim’. Behind the scenes there’s obviously a lot of work on collecting data around me to build models that represent my interests, because I think a lot of what Twitter’s about is the interest graph. That interest graph can have many different dimensions, and the weights on different aspects of it may be more important or less important from a commerce perspective. So, when you walk into an environment like that, you typically have a set of hypotheses that you want to go test. That’s where it comes back to the experimentation framework I mentioned: how many experiments can you run at a particular point in time, and how fast can you figure out what’s working or not? Because by nature, we as humans tend to infer patterns very quickly, even when there often aren’t actually patterns there. So, part of running small experiments is to break through that problem without any false assumptions and test out smaller pieces of data.
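The weighted interest graph Abbott sketches can be illustrated with a toy scorer. The interest names, weights, and threshold here are invented for illustration; they are not Twitter’s model:

```python
# Toy interest-graph targeting: each user is a dictionary of interest
# weights, and a promoted tweet is shown only if its topics score
# above a threshold -- so the ad feels native rather than jarring.

def interest_score(user_interests, tweet_topics):
    # Sum the user's weight for each topic the tweet touches.
    return sum(user_interests.get(t, 0.0) for t in tweet_topics)

def should_promote(user_interests, tweet_topics, threshold=0.5):
    return interest_score(user_interests, tweet_topics) >= threshold

# Hypothetical user who likes to swim (Abbott's own example).
user = {"swimming": 0.8, "startups": 0.3}
```

Under this sketch, a swimming-related promoted tweet clears the threshold for this user, while an unrelated one does not, which is the decoupling he describes between the user’s reading experience and the modeling behind it.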
OM MALIK 10:00
So, when you look at your experiences with Twitter, what would be your key lessons for people who are trying to develop data informed applications, or data rich applications? What are the things you think they should be looking out for and they should be doing in the future?
MICHAEL A 10:18
Well, I think one of the key things, even with this example we’re talking about with promoted tweets, is how you tie the collection of data to the inference of knowledge from that data, which feeds directly back into the application. Historically, 10 years ago, you wouldn’t have had a model like that: you would have collected that data in some OLTP environment, ETL’d it into some data warehouse, maybe used SAS or some tool like that to go build these models, and then 5 or 6 months later you might have gone and put that into an application, where it got hard-coded. I think what we’re seeing now is more of a virtuous cycle, where the output of an A/B test, if you will, from that experimentation framework feeds directly back into the app. So, to your question of how you design around that, I think the key thing is knowing that the data is flowing continually through the system. But the other point I like to make, which I think is sometimes overlooked when we talk about big data, is: how do you ensure that the quality of the data going into that model generation is really good? I think you’ve used this term before, Om, around data obesity, which I steal all the time; just storing data that you don’t necessarily need. What’s the real veracity of the data you’re storing, and how do you ensure that whatever you’re building, whether it’s a recommendation system or a fraud detection system, is based on some sort of truth or veracity?
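Both halves of that answer, the continuous feedback loop and the guard against “data obesity”, can be sketched together: fresh experiment records update the model in place, but only after a veracity filter. The record fields and validity rules here are invented for illustration:

```python
# Sketch of the "virtuous cycle": experiment results flow straight
# back into the serving model, gated by a data-quality check so that
# junk records are dropped rather than hoarded.

def is_valid(record):
    # Veracity filter: require a user id and a non-negative click count.
    return record.get("user_id") is not None and record.get("clicks", -1) >= 0

def update_model(model, records):
    clean = [r for r in records if is_valid(r)]
    for r in clean:
        # Simple online update: running click/view counts per topic.
        stats = model.setdefault(r["topic"], {"clicks": 0, "views": 0})
        stats["clicks"] += r["clicks"]
        stats["views"] += r["views"]
    return len(clean), len(records) - len(clean)

model = {}
batch = [
    {"user_id": 1, "topic": "swim", "clicks": 3, "views": 10},
    {"user_id": None, "topic": "swim", "clicks": 5, "views": 9},  # junk row
]
kept, dropped = update_model(model, batch)
```

The point of the gate is the one Abbott makes: a model fed continuously is only as good as the veracity of what flows into it.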
OM MALIK 12:02
So, one of the things I have often wondered about, but nobody has given me an answer to, is: how much does the application change as data keeps coming in? Is it aware? Is it learning from what you’re learning from the data, or is it hard-coded?
MICHAEL A 12:22
Well, I think every application is a little bit different, but many of the apps I’m describing have different frequencies by which they update their models. That could be daily. It tends not to be shorter than 12 hours, because you want to get enough data from that experiment to know, in my A/B test, whether that new dimension, or that new weight on a particular dimension, is relevant or not. So you have to have a statistically significant set of data before you say, “Okay, this model is great, I’m going to update all the models for targeting”.
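The gate Abbott describes, only rolling out a new model once the A/B difference is statistically significant, can be sketched with a standard two-proportion z-test. The traffic numbers below are made up, and real systems would also account for multiple comparisons and test duration:

```python
import math

def z_score(clicks_a, views_a, clicks_b, views_b):
    # Two-proportion z-test on click-through rates of variants A and B.
    pa, pb = clicks_a / views_a, clicks_b / views_b
    p = (clicks_a + clicks_b) / (views_a + views_b)   # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    return (pb - pa) / se

def promote_new_model(clicks_a, views_a, clicks_b, views_b, z_crit=1.96):
    # Require |z| above the ~95% two-sided threshold before updating.
    return abs(z_score(clicks_a, views_a, clicks_b, views_b)) >= z_crit

# Hypothetical 12 hours of data: variant B's lift is clearly significant.
decided = promote_new_model(100, 10_000, 150, 10_000)
```

This is exactly why a 12-hour floor on the update cycle makes sense: with too little traffic, the z-score stays below threshold and the apparent “pattern” may just be noise, the human failure mode he mentions.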
OM MALIK 13:01
Right. So, you worked with old-school data at your earlier company, you did WebOS for Palm, and then you worked at Twitter. Combine all those experiences, put on your thinking hat, and tell us what the future looks like for data-rich applications.
MICHAEL A 13:26
I think for me, the experience with WebOS was just incredible: seeing the notion of different sensors collecting a lot of big data. Specifically, every phone running WebOS was sending logs back to a cloud service every day that we would go and mine for errors and bugs, because the team did an amazing job shipping WebOS in, frankly, under a year after resetting it, and so we knew after shipping 1.0 software that there were going to be bugs. But we wanted to understand what was actually going on in the field. So we would mine those logs daily, to see whether we were having too many panics and needed to get another update out quickly. Fortunately, we had also developed the ability to do over-the-air updates. The other interesting end-user experience that we prototyped and then shipped, with big data and sensors, was this: it turns out that if you just record the location where someone disconnects from Bluetooth, you can build an interesting application to help me find my car, because when you’re disconnecting Bluetooth, you’re usually getting out of your car. Similarly, there’s the location of home: if your phone’s not moving between the hours of 10 pm and 6 am, whether that’s Monday through Friday or all 7 days a week, that’s probably home, and in the same way you can figure out where work is. What kind of experiences can you build around that? Again, we didn’t ship many of those, but by using the sensors on the phone we started seeing what kinds of new experiences or applications could be built. Fast forward to today, and we’re starting to see some of those.
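The two sensor heuristics Abbott mentions are simple enough to sketch directly. The event shapes and coordinates here are invented; a shipping version would obviously be far more careful about noise and privacy:

```python
# Toy versions of the WebOS-era heuristics: (1) the last location
# where Bluetooth disconnected is probably where the car is parked;
# (2) wherever the phone sits overnight is probably home.

def last_car_location(events):
    # events: (kind, (lat, lon), hour_of_day) tuples, oldest first.
    car = None
    for kind, loc, _hour in events:
        if kind == "bt_disconnect":   # disconnecting usually means leaving the car
            car = loc
    return car

def infer_home(events):
    # Most frequent location seen between 10 pm and 6 am.
    counts = {}
    for _kind, loc, hour in events:
        if hour >= 22 or hour < 6:
            counts[loc] = counts.get(loc, 0) + 1
    return max(counts, key=counts.get) if counts else None

events = [
    ("gps", (37.77, -122.41), 23),           # stationary overnight
    ("bt_disconnect", (37.78, -122.40), 9),  # parked the car at 9 am
    ("gps", (37.77, -122.41), 2),
]
```

A work-location heuristic would mirror `infer_home` with daytime weekday hours, which is the extension he alludes to.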
OM MALIK 15:12
Right. When I talk to application developers, many of them are new-generation entrepreneurs and hackers. They have a great idea, and they know how to use design and craft great experiences, but whenever I talk to them, they never really factor in the network limitations of AT&T or any other carrier. Similarly, the guys who talk the language of data don’t necessarily talk with the guys who are doing the front-end stuff. Do you think a different way of thinking needs to happen for app developers?
MICHAEL A 15:56
Yeah, I actually think you’re seeing a trend that I started seeing at Palm, with WebOS, and that I’m now seeing across many of our portfolio companies, and frankly outside of our portfolio companies as well: really pairing that senior HI, or Human Interaction, designer with the lead engineer from the beginning is so critical, so that the experience can be understood by the engineer, but also so that the designer, from a human interaction perspective, can understand the limitations of the environment they’re building for. Whether it’s assuming that you’re offline or that the connection is slower, you want a great experience; it shouldn’t just be a great experience in one small segment. I think having that balance is really key, and I think more and more companies and products are being built by having a great pairing of those two players.
OM MALIK 16:54
Do you see that we have enough of that cross-platform understanding going on?
MICHAEL A 17:02
I don’t think we do. Part of it is, I think, a lack of real empathy between the two groups. I think it’s improving, and some companies do it better than others. Again, I give kudos to Matias Duarte, who is now running HI for Android, for really pioneering this for me with WebOS. I had him and his team pair up with the folks on the engineering team. How do you get that real understanding, where an engineer says, “Okay, I really see the experience that you want to go build for our users”, while at the same time, in that case, the designers get to understand the challenges of shipping a native web app on a phone for the first time ever? How do you get that balance? Because you still need to ship. You have a tension where, oftentimes, on the design side it’s never good enough, while on the engineering side he or she wants to ship.
OM MALIK 18:02
I think when you look at the apps, when you have all this data input, whether it’s sensor input or just data informing the apps to do interesting things, it ends up being a problem that the people who develop these experiences actually don’t think about things like emotion and empathy. We had that case with Uber, where they managed to piss off people every single time they did something interesting like surge pricing: charging more at the moment of need, but without any empathy. Data informs you to charge more because there are fewer cars and more people asking for a car service. It makes sense to charge more; that’s what the data is there for. But the data also lacks empathy. I keep saying that if we are going to build this future which is driven by data, where data is driving experiences, how do we bring empathy and humanity into the data?
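The supply-and-demand logic Om describes can be made concrete with a deliberately naive pricing sketch. This is an illustration of “data without empathy”, not Uber’s actual algorithm; the cap and ratio rule are invented:

```python
# Naive surge pricing: the multiplier is just the ratio of requests
# to available cars, floored at 1x and capped. The data says "charge
# more when cars are scarce" -- exactly when riders need them most.

def surge_multiplier(requests, available_cars, cap=5.0):
    if available_cars == 0:
        return cap
    ratio = requests / available_cars
    return min(max(1.0, ratio), cap)

normal = surge_multiplier(10, 20)   # plenty of cars: no surge
peak = surge_multiplier(80, 10)     # scarcity: multiplier hits the cap
```

Nothing in the formula encodes *why* demand spiked, a storm, an emergency, New Year’s Eve, which is precisely the missing-empathy point.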
MICHAEL A 19:12
I don’t know if I have the answer for how we will, but I think you can certainly imagine that we might be forced into it by some of the reactions we’re seeing, even to Uber. You wrote a very interesting article around data, and when you step back and think through that, you say, “Okay, well, does natural selection apply here?” But on the same token, if you’re rating a driver, how you rate is also unique to you. So I wonder if there are ways we can infer from data what that rating should really be, based on the start and stop of a particular trip. Maybe your rating of that driver ends up also building a rating of you and your ability to rate. But coming back to your point on empathy: I think that, in general, it is key for any product company to really have empathy. It’s something that, now that I’m on the venture side, means having empathy with entrepreneurs. Having started a company myself, I have very deep empathy for how hard and how lonely it is to go start a company. But I think it’s really important, in this world that we’re in on the data side, to really step back and think about the implications of all these things we’re doing from a product and services perspective.
OM MALIK 20:29
Right. I know our time is up, but I think my parting thought would be that we need less data science and more data arts, and I think that would be a way of trying to bring some humanity into all of these things. Thank you for doing this.
MICHAEL A 20:45
Thanks for having me.
OM MALIK 20:45
Hopefully we’ll see you back here soon.
MICHAEL A 20:48
OM MALIK 20:48
Thank you everyone.