Voices in AI – Episode 76: A Conversation with Rudy Rucker


About this Episode

Episode 76 of Voices in AI features host Byron Reese and Rudy Rucker discussing the future of AGI, the metaphysics involved in AGI, and whether the future will be for humanity’s good or ill. Rudy Rucker is a mathematician, a computer scientist, and a writer of fiction and nonfiction, with awards for the first two books in his Ware Tetralogy series.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today my guest is Rudy Rucker. He is a mathematician, a computer scientist and a science fiction author. He has written books of fiction and nonfiction, and he’s probably best known for his novels in the Ware Tetralogy, which consists of Software, Wetware, Freeware and Realware. The first two of those won Philip K. Dick Awards. Welcome to the show, Rudy.
Rudy Rucker: It’s nice to be here Byron. This seems like a very interesting series you have and I’m glad to hold forth on my thoughts about AI.
Wonderful. I always like to start with my Rorschach question which is: What is artificial intelligence? And why is it artificial?
Well a good working definition has always been the Turing test. If you have a device or program that can convince you that it’s a person, then that’s pretty close to being intelligent.
So it has to master conversation? It can do everything else, it can paint the Mona Lisa, it could do a million other things, but if it can’t converse, it’s not AI?
No, those other things are also a big part of it. You’d want it to be able to write a novel, ideally, or to develop scientific theories—to do the kinds of things that we do, in an interesting way.
Well, let me try a different tack, what do you think intelligence is?
I think intelligence is to have a sort of complex interplay with what’s happening around you. You don’t want the old cliché of the robotic voice, or the screen with capital letters on it, not even able to use contractions: “do not help me.” You want something that’s flexible and playful in intelligence. I mean, even in movies, when you look at the actors, you often will get a sense that this person is deeply unintelligent or this person has an interesting mind. It’s a richness of behavior, a sort of complexity that engages your imagination.
And do you think it’s artificial? Is artificial intelligence actual intelligence or is it something that can mimic intelligence and look like intelligence, but it doesn’t actually have any, there’s no one actually home?
Right, well I think the word artificial is misleading. I think as you asked me before the interview about my being friends with Stephen Wolfram, and one of Wolfram’s points has been that any natural process can embody universal computation. Once you have universal computation, it seems like in principle, you might be able to get intelligent behavior emerging even if it’s not programmed. So then, it’s not clear that there’s some bright line that separates human intelligence from the rest of the intelligence. I think when we say “artificial intelligence,” what we’re getting at is the idea that it would be something that we could bring into being, either by designing or probably more likely by evolving it in a laboratory setting.
So, on the Stephen Wolfram thread, his view is everything’s computation and that you can’t really say there’s much difference between a human brain and a hurricane, because what’s going on in there is essentially a giant clockwork running its program, and it’s all really computational equivalence, it’s all kind of the same in the end. Do you subscribe to that?
Yeah I’m a convert. I wouldn’t use the word ‘clockwork’ that you use because that already slips in an assumption that a computation is in some way clunky and with gears and teeth, because we can have things—
But it’s deterministic, isn’t it?
It’s deterministic, yes, so I guess in that sense it’s like clockwork.
So Stephen believes, and you hate to paraphrase something as big as like his view on science, but he believes that everything is—not a clockwork, I won’t use that word—but everything is deterministic. But, even the most deterministic things, when you iterate them, become unpredictable, and they’re not unpredictable inherently, like from a universal standpoint. But they’re unpredictable from how finite our minds are.
They’re in practice unpredictable?
So, a lot of natural processes—well, when you take Physics I, you say, oh, I can predict, if I fire an artillery shot, where it’s going to land, because it’s going to travel along a perfect parabola, and I can just work it out on the back of an envelope in a few seconds. And then when you get into reality, well, they don’t actually travel on perfect parabolas; they have this odd-shaped curve due to air friction, which isn’t linear—it depends how fast they’re going. And then you slip into saying, “Well, I really would have to simulate this.”
And then when you get into saying you have to predict something by simulating the process, then the event itself is simulating itself already, and in practice, the simulation is not going to run appreciably faster than just waiting for the event to unfold, and that’s the catch. We can take a natural process and it’s computational in the sense that it’s deterministic, so you think well, cool, I’ll just find out the rule it’s using and then I’ll use some math tricks and I’ll predict what it’s going to do.
For most processes, it turns out there aren’t any quick shortcuts. That goes back to Alan Turing, who proved way back when that you can’t effectively get extreme speed-ups of universal processes. So then we’re stuck with saying: maybe it’s deterministic, but we can’t predict it. And going slightly off on a side thread here, this question of free will always comes up, because we say, well, “we’re not like deterministic processes, because nobody can predict what we do.” And the thing is, if you get a really good AI program that’s running at its top level, then you’re not going to be able to predict that either. So we kind of confuse free will with unpredictability, but actually unpredictability’s enough.
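The irreducibility being described here can be sketched with Wolfram’s elementary cellular automaton Rule 30 (my illustration, not one from the conversation): every step is perfectly deterministic, yet in practice the only known way to learn the pattern after n steps is to run all n steps.

```python
# A minimal sketch of computational irreducibility using Wolfram's
# Rule 30: a deterministic rule with no known shortcut formula for
# its long-run behavior -- you just have to run the simulation.

def rule30_step(cells):
    """Apply Rule 30 to a row of 0/1 cells (fixed zero boundary)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        left, center, right = padded[i], padded[i + 1], padded[i + 2]
        # Rule 30: new cell = left XOR (center OR right)
        out.append(left ^ (center | right))
    return out

def evolve(cells, steps):
    """Run the automaton forward step by step -- the 'simulation'."""
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

# Start from a single live cell; the pattern grows chaotically even
# though every individual step is trivially deterministic.
row = [0] * 31
row[15] = 1
final = evolve(row, 15)
```

The point of the sketch is the shape of `evolve`: there is no closed-form expression to jump straight to step 15, which is exactly the "no quick shortcuts" situation described above.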
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

AWS Re:Invent 2018 Reflects an Industry Coming of Age

As a first take, the 2018 AWS Re:Invent conference, held in Las Vegas, seemed slicker, though bigger, than in previous years. While conference sessions were far too distributed across multiple hotels to offer a coherent view, the big-barn expo exuded a feeling of knowing what it was about. Even the smallest vendors had stands which went beyond the lowest-common-denominator quick-assembly cube, suggesting either (a) the organisers had put more thought into it or (b) the vendors were better established and (therefore) had more money. All in all, it felt less of a bun fight — more space between stands, less urgency to get from one place to another.

It would be too much of an extrapolation to suggest this reflects the state of the cloud marketplace in general, and AWS in particular; however, it does serve as a useful backdrop upon which to paint a picture of an industry maturing beyond its “look at us, over here, we are different!” roots. From the sessions held for analysts, a couple of notably aligned moments stand out. The first involved use of the H-word, met with a smattering of laughter, as an AWS representative spoke of embracing (my word) hybrid architectures and deploying capabilities (in the form of AWS Snowball Edge) inside the enterprise boundary.

The second, also met with more of an accepting shrug than anything, was a presentation by Keith Jarrett, AWS’ worldwide lead on cloud economics, which accepted, nay endorsed, the fact that AWS’ cloud models wouldn’t always be the cheapest option for everything. Any thoughts of “ah-HA! Got you!” were almost immediately overtaken by “Well, of course, how could it be?” — unless someone has also invented the perpetual motion machine or some other magical device. At the risk of repeating the obvious, there is no silver bullet/single solution/one-model-to-rule-them-all in technology; there never has been and never will be. Keith went on to present a series of KPIs around value creation, rather than pure cost.

So, with maturity comes the circumspection of understanding one’s place in the world, what one brings to the party, and therefore a level of differentiation based on competence, not capability: in a nutshell, it’s not about “use cloud” but “if you want to use cloud-based services, work with us, as we do it better than anyone else.” We saw this across the AWS portfolio, for example through the repeated theme of ‘frameworks’ — AWS has one for AI (as presented by Swami Sivasubramanian, VP, Amazon Machine Learning), one for IoT (thank you Dirk Didascalou, VP, AWS IoT), and one for more general cloud adoption (hat-tip Dave McCann, VP, AWS Marketplace, and Todd Weatherby, VP, AWS Professional Services).

It all makes sense — if the platform is (increasingly) a commodity, the differentiator becomes how it is used. We see this over and over again: now that Kubernetes is (becoming) the de facto target for containerised applications, for example, to say “we do Kubernetes” is no longer interesting. Nor, for that matter, are the frameworks, from a business perspective — illustrated by the current trend away from DevOps as an end in itself and towards governance models and tooling such as Value Stream Management. Most important is whether organisations can innovate and deliver faster, harness opportunities, deliver new customer experiences and generate business value more effectively with one provider or another.

This is all good news for the enterprise, as the terminology and philosophical underpinnings of cloud computing increasingly align with the more traditional thinking pervading our largest organisations. Over the past ten years, it has been enough to ‘do’ cloud, or ‘do’ open source, in order to create competitive advantage: indeed, upstart organisations (the usual suspects of Airbnb, Uber, indeed Amazon et al) have built their businesses on the basis of rapid time-to-value. Simply put, older companies, with all their meetings, legacy systems and indeed thinking, have not been able to deliver as quickly as businesses without all that baggage.

Indeed, they still can’t. But those old companies are still there, for a number of reasons. First, the new breed have largely tackled the customer-facing elements of business, but there’s only so much of that to go around. It is completely unsurprising that Amazon is opening (albeit automated) shops, and that Uber (together with Toyota) is investing in (driverless) car fleets: someone has to do the infrastructure stuff. Meanwhile, not all customer-oriented business can be done on an ad-hoc basis. Take healthcare, for example, which (thank goodness) has not thrown itself gaily into adopting the heck-why-not-throw-away-the-old-rules-and-see-what-happens business models of the platform economy.

And indeed, while big old businesses are still big and old, and therefore unable to act quite so responsively as the youngsters, three things are happening: they are getting better at that whole innovation thing — or, indeed, learning how to align new models of innovation with their own approaches; the younger companies are having to learn that they can’t get away with avoiding complexity for ever; and, in parallel, as we have already seen, technology providers such as AWS are maturing to fit the evolving needs and capabilities of both sides. It’s not just the big players: at Re:Invent I was also able to talk to both organisations in Amazon’s partner ecosystem and their customers, notably in a conversation with Epic Games, maker of that quite popular game Fortnite, about both AWS and MongoDB.

Where does this leave us? First, AWS is establishing itself not just as a cloud player but as a technology provider, and rightly so, moving away from a false debate based on cost and towards one based on value. Second, AWS recognises that it cannot go it alone, nor does it need to (a historical echo of Microsoft’s attempts to play the better-together card, which worked to an extent but could never be the whole answer). Third, and taking into account the fact that AWS is not the only game in town, this reflects a more general maturing of the industry’s relationship with business, as attention moves beyond the platform and towards how to get the most out of it in what is, frankly, a highly complex and constantly evolving world.

Whatever happens, complexity of all types will continue to constrain our ability to maximise the value we can get from technology. While technological complexity may appear to be a Gordian knot, it is more a Hydra — cut off one head and many more grow back. Understanding this, and trying to tame and align complexity as a platform rather than looking to restrict and present one model above all, holds the key to unlocking future innovation for businesses of all sizes.

HT to my colleague Enrico Signoretti for his report Alternatives to Amazon AWS S3.

Voices in AI – Episode 75: A Conversation with Kevin Kelly


About this Episode

Episode 75 of Voices in AI features host Byron Reese and Kevin Kelly discussing the brain, the mind, what it takes to make AI, and Kevin’s thoughts on its inevitability. Kevin has written books such as ‘New Rules for the New Economy’, ‘What Technology Wants’, and ‘The Inevitable’. Kevin also helped start Wired Magazine, an internet and print magazine of tech and culture.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today I am so excited we have as our guest Kevin Kelly. You know, when I was writing the biography for Kevin, I didn’t even know where to start or where to end. He’s perhaps best known for starting Wired magazine a quarter of a century ago, but that is just one of many, many things in an amazing career [path]. He has written a number of books: New Rules for the New Economy, What Technology Wants, and most recently The Inevitable, where he talks about the immediate future. I’m super excited to have him on the show. Welcome, Kevin.
Kevin Kelly: It’s a real delight to be here, thanks for inviting me.
So what is inevitable?
There’s a hard version and a soft version, and I kind of adhere to the soft version. The hard version is kind of a total deterministic world in which if we rewound the tape of life, it all unfolds exactly as it has, and we still have Facebook and Twitter, and we have the same president and so forth. The soft version is to say that there are biases in the world, in biology as well as its extension into technology, and that these biases tend to shape some of the large forms that we see in the world, still leaving the particulars, the specifics, the species to be completely, inherently, unpredictable and stochastic and random.
So that would say that, for instance, on any planet that has water and life, you’ll find fish in the water; or, if you rewound the tape of life, you’d probably get flying animals again and again, but a specific bird, a robin, is not inevitable. And the same thing with technology. Any planet that discovers electricity and wires will have telephones. So telephones are inevitable, but the iPhone is not. And the internet’s inevitable, but Google’s not. AI’s inevitable, but the particular variety or character, the specific species of AI, is not. That’s what I mean by inevitable—that there are these biases, built by the very nature of chemistry and physics, that will bend things in certain directions.
And what are some examples of those that you discuss in your book?
So, technology’s basically an extension of the same forces that drive life—a kind of accelerated evolution is what technology is. So if you ask the question about what the larger forces in evolution are: we have a movement towards complexity; we have a movement towards diversity; we have a movement towards specialization; we have a movement towards mutualism. Those are also happening in technology, which means that, all things being equal, technology will tend to become more and more complex.
The idea that there’s any kind of simplification going on in technology is completely erroneous; there isn’t. It’s not that the iPhone is any simpler. There’s a simple interface. It’s like an egg: a very simple interface, but inside it’s very complex. The inside of an iPhone continues to get more and more complicated, so there is a drive such that, all things being equal, technology will be more complex, and more and more specialized.
So, the history of technology in photography was: there was one kind of camera. Then there was a special kind of camera for high speed; maybe another kind that could go underwater; maybe a kind that could do infrared; and then eventually we would make a high-speed, underwater, infrared camera. So, all these things become more and more specialized, and that’s also going to be true of AI: we will have more and more specialized varieties of AI.
So let’s talk a little bit about [AI]. Normally the question I launch this with—and I heard your discourse on it—is: What is intelligence? And in what sense is AI artificial?
Yes. So the big hairy challenge of that question is that we humans, collectively as a species, at this point in time have no idea what intelligence really is. We think we know it when we see it, but we don’t really, and as we try to make artificial, synthetic versions of it, we are, again and again, coming up against the realization that we don’t really know how it works and what it is. Our best guess right now is that there are many different subtypes of cognition that collectively interact with each other, are codependent on each other, and form the total output of our minds, and of course other animal minds. And so I think the best way to think of this is that we have a ‘zoo’ of different types of cognition, different types of solving things, of learning, of being smart, and that collection varies a little bit from person to person and a lot between different animals in the natural world and so…
That collection is still being mapped, and we know that there’s something like symbolic reasoning. We know that there’s kind of deductive logic, that there’s something about spatial navigation as a kind of intelligence. We know that there’s mathematical type thinking; we know that there’s emotional intelligence; we know that there’s perception; and so far, all the AI that we have been ‘wowed’ by in the last 5 years is really all a synthesis of only one of those types of cognition, which is perception.
So all the deep learning neural net stuff that we’re doing is really just varieties of perception, of perceiving patterns, whether they’re audio patterns or image patterns; that’s really as far as we’ve gotten. But there’s all these other types, and in fact we don’t even know what all the varieties of types [are]. We don’t know how we think, and I think one of the consequences of trying to make AI is that AI is going to be the microscope that we need to look into our minds to figure out how they work. So it’s not just that we’re creating artificial minds, it’s the fact that that creation—that process—is the scope that we’re going to use to discover what our minds are made of.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Five Questions For… Seong Park at MongoDB

MongoDB came onto the scene alongside a number of data management technologies, all of which emerged on the premise that “you don’t need to use a relational database for that.” Back in the day, SQL-based approaches became the only game in town thanks to the way they handled the storage challenges of their time; then a bunch of open source developers came along and wrecked everything. So we are told.
Having firmly established itself in the market and proved that it can deliver scale (Fortnite is a flagship customer), the company is nonetheless needing to move with the times. Having spoken to Seong Park, VP of Product Marketing & Developer Advocacy, several times over the past 6 weeks, I thought it was worth capturing the essence of our conversations.
Q1: How do you engage with developers in ways that are the same as, or different from, how you engage with data-oriented engineers? Traditionally these have been two separate groups to be treated separately; is this how you see things?
MongoDB began as the solution to a problem that was increasingly slowing down both developers and engineers: the old relational database simply wasn’t cutting the mustard anymore. And that’s hardly surprising, since the design is more than 40 years old.
MongoDB’s entire approach is about driving developer productivity, and we take an object-focused approach to databases. You don’t think of data stored across tables, you think of storing info that’s associated, and you keep it together. That’s how our database works.
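A minimal sketch of that object-focused model (illustrative field names, plain Python rather than MongoDB’s actual API): associated data lives together in one document rather than being normalized across tables.

```python
# Relational style: associated data split across three "tables",
# reassembled by a join at query time.
customers = {1: {"name": "Ada"}}
orders = {101: {"customer_id": 1}}
order_items = [
    {"order_id": 101, "sku": "widget", "qty": 2},
    {"order_id": 101, "sku": "gadget", "qty": 1},
]

# Document style: one self-contained document, read in a single fetch.
order_doc = {
    "_id": 101,
    "customer": {"name": "Ada"},
    "items": [
        {"sku": "widget", "qty": 2},
        {"sku": "gadget", "qty": 1},
    ],
}

# Reconstructing the relational view requires a join...
joined_items = [i for i in order_items if i["order_id"] == 101]
# ...while the document keeps everything one lookup away.
doc_items = order_doc["items"]
```

The trade-off, of course, is that embedding favours the access patterns of the application that owns the document; it is a sketch of the data shape, not a universal modelling rule.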
We want to make sure that developers can build applications. That’s why we focus on offering uncompromising user experiences. Our solution should be as easy, seamless, simple, effective and productive as possible. We are all about enabling developers to spend time on the things they care about: developing, coding and working with data in a fast, natural way.
When it comes to DevOps, a core tenet of the model is to create multi-disciplinary teams that can collectively work in small squads, to develop and iterate quickly on apps and microservices. Increasingly, data engineers are a part of that team, along with developers, operations staff, security, product managers, and business owners.
We have built capabilities and tools to address all of those groups. For data engineers, we have in-database features such as the aggregation pipeline that can transform data before processing. We also have connectors that integrate MongoDB with other parts of the data estate – for example, from BI to advanced analytics and machine learning.
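The aggregation pipeline mentioned above is, in essence, an ordered list of transformation stages. A hedged sketch follows (illustrative field names; with a driver such as pymongo, the stage list would be passed to `collection.aggregate()`), alongside a plain-Python equivalent for intuition:

```python
# An aggregation pipeline expressed as data: filter, then group and sum.
pipeline = [
    {"$match": {"status": "complete"}},
    {"$group": {"_id": "$region", "total": {"$sum": "$amount"}}},
]

# The same transformation in plain Python, for intuition only
# (hypothetical sample documents):
docs = [
    {"status": "complete", "region": "eu", "amount": 10},
    {"status": "pending", "region": "eu", "amount": 99},
    {"status": "complete", "region": "us", "amount": 5},
    {"status": "complete", "region": "eu", "amount": 7},
]

# $match stage: keep only completed orders.
matched = [d for d in docs if d["status"] == "complete"]

# $group stage: sum amounts per region.
totals = {}
for d in matched:
    totals[d["region"]] = totals.get(d["region"], 0) + d["amount"]
```

Because the stages run inside the database, data can be shaped before it is handed to BI or machine learning tooling, which is the "transform data before processing" point made above.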
Q2: Database structures such as MongoDB are an enabler of DevOps practices; at the same time, data governance can be a hindrance to speed and agility. How do you ensure you help speed things up, and not slow them down?
Unlike other non-relational databases, MongoDB gives you a completely tunable schema – the skeleton representing the structure of the entire database. The benefit here is that the development phase is supported by a flexible and dynamic data model, and when the app goes into production, you can enforce schema governance to lock things down.
The governance itself is also completely tunable, so you can set up your database to support your needs, rather than being constrained by structure. This is an important differentiator for MongoDB.
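One way this tunable governance surfaces in MongoDB is schema validation via a `$jsonSchema` validator. The sketch below uses hypothetical field names and simply builds the option documents a collection could be created with (in pymongo, roughly `db.create_collection("users", validator=validator, ...)`), rather than opening a live connection:

```python
# A $jsonSchema validator: absent during early development, it can be
# attached later to lock the schema down for production.
validator = {
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["email", "created_at"],
        "properties": {
            "email": {"bsonType": "string"},
            "created_at": {"bsonType": "date"},
        },
    }
}

# Governance itself is tunable: validationLevel "moderate" checks only
# inserts and updates to already-valid documents, "strict" checks all;
# validationAction can warn instead of rejecting.
collection_options = {
    "validator": validator,
    "validationLevel": "moderate",
    "validationAction": "error",
}
```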
Another major factor which reduces speed and agility is scale. Over the last two to three years, we have been building mature tooling that enterprises and operators alike will care about, because they make it easy to manage and operate MongoDB, and because they make it easy to apply upgrades, patches and security fixes, even when you’re talking about hundreds of thousands of clusters.
One of the key reasons why we have seen such acceleration in the adoption of MongoDB, not only in the enterprise but also by startups and smaller businesses, is that we make it so easy to get started with MongoDB. We want to make it easy to get to market very quickly, while we’re also focusing on driving down cost and boosting productivity. Our approach is to remove as much friction in the system as possible, and that’s why we align so well with DevOps practices.
In terms of legacy modernization, we are running a major initiative enabling customers to apply the latest innovations in development methodologies, architectural patterns, and technologies to refresh their portfolio of legacy applications. This is much more than just “lift and shift”. Moving existing apps and databases to faster hardware, or on to the cloud might get you slightly higher performance and marginally reduced cost, but you will fail to realize the transformational business agility, scale, or deployment freedom that true legacy modernization brings.
In our experience, by modernizing with MongoDB organizations can build new business functionality 3-5x faster, scale to millions of users wherever they are on the planet, and cut costs by 70 percent and more, all by unshackling from legacy systems.
Q3: Traditionally you’re either a developer or a database person … does this do away with database engineers? Do we need database engineers or can developers do everything?
Developers are now the kingmakers; they are the hardest group of talent to retain. The biggest challenge most enterprises see is about finding and keeping developer talent.
If you are looking for the best experience in working with data, MongoDB is the answer, in our opinion! It is not just about persistence and the database: MongoDB Stitch is a serverless platform that drives integration with third-party cloud services and enables event-based programming through Stitch triggers.
Ultimately, it comes down to a data platform that any number of roles can use, in their “swim lanes”. With the advent of cloud, it’s so easy for customers not to have to worry about things they did before, since they consume a pay-as-you-go service. Maybe you don’t need a DBA for a project any more: it’s important to allow our users to consume MongoDB in the easiest way possible.
But the bottom line is that we’re not doing away with database engineers, but shifting their role to focus on making a higher-value impact. For engineers we have capabilities and features like the aggregation pipeline, allowing us to transform data before processing.
Q4: An IoT-related question… in retail, you want to put AI into the supermarket environment, whether for video surveillance or inventory management. It’s not about distributing across the cloud, but out to the edge and “fog” computing…
At our recent MongoDB Europe event in London, we announced the general availability of MongoDB Mobile as well as the beta for Stitch Mobile Sync. Since we already have a lot of customers on the network edge (you’ll find MongoDB on oil rigs, across the IoT, used by airlines, and for the management of fleets of cars and trucks), a lot of these elements are already there.
The advantage is how easy we make it to work with that edge data. We’re thinking about the experience we provide in terms of working with data – and giving people access to what they care about – tooling, integration, and to look at what MongoDB can provide natively on a data platform.
Q5: I’m interested to know what proportion of your customer base, and/or data/transaction base, are ‘cloud native’ versus more traditional enterprises. Indeed, is this how you segment your customers, and how do you engage with different groups that you do target?
We’d argue that every business should become cloud native – and many traditional enterprises are on that journey.
Around 70 percent of all MongoDB deployments are on a private or public cloud platform, and from a product portfolio perspective, we work to cover the complete market – from startup programs to self-service cloud services, to corporate and enterprise sales teams. As a result, we can meet customers wherever they are, and whatever their size.
My take: better ways exist, but how to preach to the non-converted?
Much that we see around us in technology is shaped as a result of the constraints of its time. Relational databases enabled a step up from the monolithic data structures of the 1970s (though of course, some of the latter are still running, quite successfully), in no small part by enabling more flexible data structures to exist. MongoDB took the same idea one step further, doing away with the schema completely.
Is the MongoDB model the right answer for everything? No, and that would never be the point – nor are relational models, nor any other data management structures (including the newer capabilities in MongoDB’s stable). Given that data management vendors will continue to innovate, more important is choosing the right tool for the job, or indeed, being able to move from one model to another if need be.
This is more about mindset, therefore. Traditional views of IT have been to use the same technologies and techniques, because they always worked before. Not only does this risk trying to put square pegs in round holes, but also it can mean missed opportunities if the definition of what is possible is constrained by what is understood.
I would love to think none of this needs to be said, but in my experience large organisations still look backward more than they look forward, to their loss. We often talk about skills in data science, the shortage of developers and so on, but perhaps the greater gap is in senior executives that get the need for an engineering-first mindset. If we are all software companies now, we need to start acting accordingly.

Five Questions for Melissa Kramer of Live UTI Free

While the notion of healthcare technology may be in the spotlight with AI, blockchain and all that, the coalface of care requires building an understanding of patient needs and responding in an appropriate way. Today, in many cases, even some of the most common conditions are subject to a dearth of information, or worse, misinformation that results in poor diagnosis and treatment. I learned this when working with a London hospital on care pathways for DVT; I was naturally interested in the work of Live UTI Free, which offers a clear information resource for patients, practitioners and indeed, researchers.
Read on to learn from Melissa Kramer, founder, how not all technological innovations need to maximize the use of buzzwords or bandwagons, and what lessons can be learned across healthcare diagnostics and beyond. 

1. Let’s set some context — what’s the purpose behind Live UTI Free?

We founded Live UTI Free to address a gap in the sharing of evidence-based information to sufferers of recurrent and chronic urinary tract infection (UTI).
To provide some context for why closing that gap is important, 1 in 2 females will experience a UTI in their lifetime, and of those, up to 44% will suffer a recurrence. With each recurrence, the chance of another increases. For many, recurrent UTI is debilitating, and the impact extends to the economy, with billions spent each year on UTI alone.
Despite how common UTI is, there has never been an accurate, standard method of UTI testing.
Although the impact of this issue is significant on many levels, UTI remains an area of women’s health that suffers from steadfastly held misinformation on both sides of the patient/practitioner relationship.
We aim to act as a conduit of information between researchers and patients, bridging gaps in knowledge where possible and shedding light on potential avenues for better diagnosis and treatment. Ultimately, our goal is to use our insights to advance research and development in this space.

2. How do you go about collating and delivering information, or is it ‘simply’ that even the most straightforward info is difficult to find today?

We created our platform because we identified how difficult it was for patients to find straightforward information online, and we wanted to fix this. In order to do so, we first had to collect information from patients themselves, to discover what it was they were looking for and how.
We spent more than 6 months interviewing patients and learning about their online behavior, before we put a single piece of information online. This activity alone meant we had collected more patient-perspective data on the subject than most recent studies.
Once we understood the typical patient journey, and where the glitches were, we started to collate scientific evidence and to interpret it into everyday language. We do this with the help of researchers, but the process is hardly straightforward.
If we relied on peer-reviewed studies alone, there would be little we could offer our audience in terms of new diagnosis and treatment options. Instead, we’ve developed our offering via a combination of studies, and direct input from practitioners, researchers and pharmacists.
This requires a continuous loop of interviews, academic research, and amendments to the information we provide. And on top of that is another layer of patient feedback that directly shapes what we offer on our site.
Long story short: straightforward info, particularly on health topics, is difficult to find. But once you do find it, you also have to make sure it’s useful to whoever it’s intended for.

3. What mechanisms do you have to do this, beyond the online site and do you think your user-centric approach has been worthwhile?

Aside from the patient interviews mentioned in the last question, we also launched a patient quiz at the same time as launching our site. The quiz has served two purposes:

  1. First, it has allowed us to help direct users to the most pertinent information, based on their current knowledge and experience.
  2. At the same time, we have collected thousands of data points that, when aggregated, provide incredible insight into patient experience, why people use our site, and what we can do better. Our approach has culminated in extremely fast growth in traffic to the site, and daily positive feedback.

Beyond the online site, we have developed a network of scientists, practitioners and other medical professionals.
We’re also in regular contact with commercial companies that are working on products or services that address specific aspects of recurrent UTI.
By maintaining a user-centric approach and fostering relationships with other key stakeholders, we hope to provide value that extends beyond problem-solving for individual patients. We have already begun to steer change for those in our network.  

4. What challenges have you faced starting up Live UTI Free, and how have you overcome them?

We are, and always have been, acutely aware of the position we hold in between patients and practitioners, and information that connects the two. Our primary concern revolves around how to achieve our goals, while adhering to the ethical standards we’ve placed upon ourselves. This in itself is a challenge.
We look at everything we do through this ethics lens. We question how any potential partnership or revenue opportunity fits within our own ethical guidelines, and we carefully consider data privacy when it comes to our patient quiz, interviews and correspondence we receive.
To help overcome this challenge we’ve put in place a funding policy and community guidelines, as well as establishing an ethics advisory board to help with these decisions.
A further challenge has been navigating the line between neutral accuracy and providing information that is actionable for our audience. We don’t provide recommendations of any kind, but we know through our research that patients want a workflow, rather than a ‘choose your own adventure’.
We’ve partially overcome this by constructing our content in such a way that the user is guided through a logical sequence. The rest is a work in progress, as the scientific study required to truly point someone towards action steps for recurrent UTI is still in the future. When it exists, we’ll be ready to relay the information to our audience.

5. How do you see things moving into the future?

The data we have collected via our patient quiz is one of a kind, and we’re now starting to use these insights to help guide product R&D for this patient population.
We are currently assessing grant opportunities, in collaboration with researchers, with a focus on patient perspective data. Our reach means we will make a valuable partner in larger research studies and clinical trials, and we’re open to discussion in this regard. We plan to launch an evidence-based e-commerce site next year, to bring our many user requests for this to fruition.
Live UTI Free will continue as a user-centric patient advocacy organisation, existing to support our fast-growing community, which includes sufferers of chronic and recurrent UTI, practitioners, and researchers. Readers can get in touch if interested in:

  • Patient perspective data and patient experience
  • Patient recruitment for clinical trials
  • Product development for recurrent UTI sufferers
  • Our practitioner and researcher network


Voices in AI – Episode 74: A Conversation with Dr. Kai-Fu Lee


About this Episode

Episode 74 of Voices in AI features host Byron Reese and Dr. Kai-Fu Lee discussing the potential of AI to disrupt job markets, the comparison of AI research and implementation in the U.S. and China, as well as other facets of Dr. Lee’s book “AI Superpowers”. Dr. Kai-Fu Lee, previously president of Google China, is now the CEO of Sinovation Ventures.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today I am so excited my guest is Dr. Kai-Fu Lee. He is, of course, an AI expert. He is the CEO of Sinovation Ventures. He is the former President of Google China. And he is the author of a fantastic new book called “AI Superpowers.” Welcome to the show, Dr. Lee.
Kai-Fu Lee: Thank you Byron.
I love to begin by saying, AI is one of those things that can mean so many things. And so, for the purpose of this conversation, what are we talking about when we talk about AI?
We’re talking about the advances in machine learning… in particular Deep Learning and related technologies as it applies to artificial narrow intelligence, with a lot of opportunities for implementation, application and value extraction. We’re not talking about artificial general intelligence, which I think is still a long way out.
So, confining ourselves to narrow intelligence, if someone were to ask you worldwide, not even getting into all the political issues, what is the state of the art right now? How would you describe where we are as a planet with narrow artificial intelligence?
I think we’re at the point of readiness for application. I think the greatest opportunity is application of what’s already known. If we look around us, we see very few of the companies, enterprises and industries using AI when they all really should be. Internet companies use AI a lot, but it’s really just beginning to enter financial, manufacturing, retail, hospitals, healthcare, schools, education and so on. It should impact everything, and it has not.
So, I think what’s been invented and how it gets applied/implemented/monetized… value creation, that is a very clear 100% certain opportunity we should embrace. Now, there can be more innovations, inventions, breakthroughs… but even without those I think we’ve got so much on our hands that’s not yet been fully valued and implemented into industry.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

On Value Stream Management in DevOps, and Seeing Problems as Solutions

You know that thing when a term emerges and you kind of get what it means, but you think you’d better read up on it to be sure? Well, so it was for me with Value Stream Management, as applied to agile development in general and DevOps in particular. So, I have done some reading up.

Typically, there seems to be some disagreement around what the phrase means. A cursory Google of “value stream DevOps” suggests that “value stream mapping” is the term du jour; however, a debate continues on the difference between “value stream mapping” (ostensibly around removing waste in lean best practices) and “value streams” – for once, and for illustration purposes only, I refer to Wikipedia: “While named similarly, Lean value stream mapping is a process-based practice that seeks to identify waste, whereas value streams provide a higher-level overview of how a stakeholder receives value.”

Value streams are also seen (for example in this 2014 paper) as different to (business) processes, in that they are about making sure value is added, versus being about how things are done. This differentiation may help business architects, who (by nature) like precision in their terminology. However the paper also references Hammer & Champy’s 1993 definition of a process, which specifically mentions value: “Process is a technical term with a precise definition: an organized group of related activities that together create a result of value to the customer.” Surely a process without value is no process at all?

Meanwhile analysts such as Forrester have settled on Value Stream Management, which they reference as “an emerging market” even though at least some of the above has been around for a hundred years or so. Perhaps none of the terminological debate matters, at least to the people trying to do things with whatever the term means. Which is what, precisely? The answer lies in the restating of a problem as a solution: if value stream management is the answer, the challenge comes from a recognition that things are not working as well as they could be, and therefore are not delivering value as a result.

In the specific instance of DevOps, VSM can be seen as a direct response to the challenge of DevOps Friction, which I write about in this report. So, how does the pain manifest itself? The answer is twofold. For people and organisations who are already competent at DevOps, particularly those cloud-native organisations who are DevOps-by-default (and might wonder what other approach might exist), the challenge is knowing whether specific iterations, sprints and releases are of maximum benefit, delivering something of use as efficiently as possible.
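For that first group, “maximum benefit” only becomes actionable once it is measured. Here is a minimal sketch of one such value stream metric — lead time from first commit to production deploy, per release. The release records and timestamps are illustrative inventions, not data from any real pipeline or tool:

```python
# Illustrative value stream metric: lead time per release, i.e. how long
# it takes work to flow from first commit to a production deploy.
# The timestamps below are invented for the example.
from datetime import datetime
from statistics import mean

releases = [
    {"first_commit": datetime(2018, 10, 1, 9, 0),  "deployed": datetime(2018, 10, 3, 17, 0)},
    {"first_commit": datetime(2018, 10, 4, 10, 0), "deployed": datetime(2018, 10, 9, 12, 0)},
]

def lead_time_hours(release: dict) -> float:
    """Hours elapsed between the first commit and the production deploy."""
    return (release["deployed"] - release["first_commit"]).total_seconds() / 3600

times = [lead_time_hours(r) for r in releases]
print(f"mean lead time: {mean(times):.1f}h")  # → mean lead time: 89.0h
```

Tracking a number like this per iteration is one way to answer the question the Zen master keeps asking: is value actually flowing, and is it improving?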

In this instance, the discipline of value stream management acts as Zen master, asking why things are as they are and whether they can be improved. Meanwhile the ‘emerging market’ of VSM refers to tooling which smooths and simplifies development and operational workflows, enabling the discipline to be implemented and hopefully maximising value as a result. Which gives us another “problem-as-solution” flip — while many of the tools available today are API-based, enabling their integration into workflows, they have not always been built with end-to-end value delivery in mind.

A second group feeling the pain concerns organisations that see DevOps as an answer, but are yet to harness it in a meaningful way beyond individual initiatives — many traditional enterprises tend to fall into this category, and we’ve held various webinars about helping organisations scale their DevOps efforts. For these groups, value stream management offers an entry point: it suggests where effort should be focused, not as DevOps as an end in itself but as a means for delivering increased, measurable value out of software.

In addition, it creates a way of thinking about DevOps as practical workflows, enabled by automation tools, as opposed to ‘just’ a set of philosophical constructs. The latter are fine, but without some kind of guidance, organisations can be left with a range of tooling options but no clear idea about how to make sure they are delivering. It’s for this reason that I was quite keen on GitHub’s announcement around actions, a couple of weeks ago: standardisation, around not just principles, but also processes and tools, is key to efficiency.

The bottom line is that, whatever the terminology, we are moving away from thinking that ‘DevOps is the answer’ and towards ‘implementing the right kind of DevOps processes, with the right tools, to deliver higher levels of value’. Whether about principles or tooling, value stream management can therefore be filed in the category of concepts that, when working right, cease to exist. Perhaps this will become true in the future but right now, we are a long way from that point.

Afterword: If you want to read up on the notions of value management as applied to business outcomes, I can recommend this book by my old consulting colleague Roger Davies.

Voices in AI – Episode 73: A Conversation with Konstantinos Karachalios


About this Episode

Episode 73 of Voices in AI features host Byron Reese and Konstantinos Karachalios discuss what it means to be human, how technology has changed us in the far and recent past and how AI could shape our future. Konstantinos holds a PhD in Engineering and Physics from the University of Stuttgart, as well as being the managing director at the IEEE Standards Association.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Konstantinos Karachalios. He is the Managing Director at the IEEE Standards Association, and he holds a PhD in Engineering and Physics from the University of Stuttgart. Welcome to the show.

Konstantinos Karachalios: Thank you for inviting me.

So we were just chatting before the show about ‘what does artificial intelligence mean to you?’ You asked me that and it’s interesting, because that’s usually my first question: What is artificial intelligence, why is it artificial and feel free to talk about what intelligence is.

Yes, and first of all we see really a kind of mega-wave around the ‘so-called’ artificial intelligence—it started two years ago. There seems to be a hype around it, and it would be good to distinguish what is marketing, what is real, and what is propaganda—what are dreams what are nightmares, and so on. I’m a systems engineer, so I prefer to take a systems approach, and I prefer to talk about, let’s say, ‘intelligent systems,’ which can be autonomous or not, and so on. The big question is a compromise because the big question is: ‘what is intelligence?’ because nobody knows what is intelligence, and the definitions vary very widely.

I myself try to understand what is human intelligence at least, or what are some expressions of human intelligence, and I gave a certain answer to this question when I was invited to give testimony in front of the House of Lords. Just to make it brief, I’m not a supporter of the hype around artificial intelligence, also I’m not even supporting the term itself. I find it obfuscates more than it reveals, and so I think we need to re-frame this dialogue, and it takes also away from human agency. So, I can make a critique to this and also I have a certain proposal.

Well, start with your critique. If you think the term is either meaningless or bad, why? What are you proposing as an alternative way of thinking?

Very briefly because we can talk really for one or two hours about this: My critique is that the whole of this terminology is associated also with a perception of humans and of our intelligence, which is quite mechanical. That means there is a whole school of thinking, there are many supporters there, who believe that humans are just better data processing machines.

Well let’s explore that because I think that is the crux of the issue, so you believe that humans are not machines?

Apparently not. It’s not only we’re not machines, I think, because evidently we’re not machines, but we’re biological, and machines are perhaps mechanical although now the boundary has blurred because of biological machines and so on.

You certainly know the thought experiment that says, if you take what a neuron does and build an artificial one and then you put enough of them together, you could eventually build something that functions like the brain. Then wouldn’t it have a mind and wouldn’t it be intelligent, and isn’t that what the human brain initiative in Europe is trying to do?

This is weird, all this you have said starts with a reductionist assumption about the human—that our brain is just a very good computer. It ignores really the sources of our intelligence, which are really not all in our brain. Our intelligence has really several other sources. We cannot reduce it to just the synapses in the neurons and so on, and of course, nobody can prove this or another thing. I just want to make clear here that the reductionist assumption about humanity is also a religious approach to humanity, but a reductionist religion.

And the problem is that people who support this, they believe it is scientific, and this, I do not accept. This is really a religion, and a reductionist one, and this has consequences about how we treat humans, and this is serious. So if we continue propagating a language which reduces humanity, it will have political and social consequences, and I think we should resist this and I think the best way to express this is an essay by Joichi Ito with the title which says “Resist Reduction.” And I would really suggest that people read this essay because it explains a lot that I’m not able to explain here because of time.

So you’re maintaining that if you adopt this, what you’re calling a “religious view,” a “reductionist view” of humanity, that in a way that can go to undermine human rights and the fact that there is something different about humans that is beyond purely humanistic.

For instance I was in an AI conference of a UN organization which brought all other UN organizations with technology together. It was two years ago, and there they were celebrating a humanoid, which was pretending to be a human. The people were celebrating this and somebody there asked this question to the inventor of this thing: “What do you intend to do with this?” And this person spoke publicly for five minutes and could not answer the question and then he said, “You know, I think we’re doing it because if we don’t do it, others were going to do it, it is better we are the first.”

I find this a very cynical approach, a very dangerous one and nihilistic. These people with this mentality, we celebrate them as heroes. I think this is too much. We should stop doing this, we should resist this mentality, and this ideology. I believe if we make a machine a citizen, and you treat your citizens like machines, then we’re not going very far as humanity. I think this is a very dangerous path.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

From Storage to Data Virtualization

Do you remember Primary Data? Well, I loved the idea and the team but it didn’t go very well for them. It’s likely there are several reasons why it didn’t. In my opinion, it boiled down to the fact that very few people like storage virtualization. In fact, I expressed my fondness for Primary Data’s technology several times in the past, but when it comes to changing the way complex, siloed storage environments are operated, you come across huge resistance, at every level!
The good news is that Primary Data’s core team is back, with what looks like a smarter version of the original idea that can easily overcome the skepticism surrounding storage virtualization. In fact, they’ve moved beyond it and presented what looks like a multi-cloud controller with data virtualization features. Ok, they call it “Data as a Service,” but I prefer Data Virtualization…and being back with the product is a bold move.
Data Virtualization (What and Why)
I’ve begun this story by mentioning Primary Data first, because David Flynn (CEO of HammerSpace and former CTO of Primary Data) did not start this new Hammerspace venture from scratch. He bought the code which belonged to Primary Data and used it to build the foundation of his new product. That allowed him and his team to get on the market quickly with the first version of HammerSpace in a matter of months instead of years.
HammerSpace is brilliant just for one reason. It somehow solves or, better, hides the problem of data gravity and allows their Data-as-a-Service platform to virtualize data sets by presenting virtualized views of them available in a multi-cloud environment through standard protocols like NFS or S3.
Yes, at first glance it sounds like hot air and a bunch of buzzwords mixed together, but this is far from being the case here… watch the demo in the following video if you don’t trust me.
The solution is highly scalable and aimed at Big Data analytics and other performance workloads for which you need data close to the compute resource quickly, without thinking too much about how to move, sync, and keep it updated with changing business needs.
HammerSpace solutions have several benefits but the top two on my list are:

  • The minimization of egress costs: This is a common problem for those working in multi-cloud environments today. With HammerSpace, only necessary data is moved where it is really needed.
  • Reduced latency: It’s crazy to have an application running on a cloud that is far from where you have your data. Just to give an example, the other day I was writing about Oracle cloud, and how good they are at creating high-speed bare-metal instances at a reasonable cost. This benefit can be easily lost if your data is created and stored in another cloud.
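The egress point is easy to quantify with back-of-the-envelope arithmetic. The sketch below compares replicating a whole dataset across clouds against moving only the working set a job actually reads; the per-GB price and the 4% working-set fraction are assumptions for illustration, not figures from any provider’s rate card or from HammerSpace:

```python
# Illustrative egress-cost comparison: copy everything vs. move only
# the data a workload actually needs. Price and sizes are assumed.
EGRESS_PRICE_PER_GB = 0.09  # assumed flat rate in USD, not a real quote

def egress_cost(gb_moved: float, price_per_gb: float = EGRESS_PRICE_PER_GB) -> float:
    """Return the egress charge for moving gb_moved gigabytes out of a cloud."""
    return gb_moved * price_per_gb

total_gb = 10_000                    # a 10 TB dataset
working_set_gb = total_gb * 0.04     # assume a job touches only 4% of it

full_copy = egress_cost(total_gb)        # naive: replicate the whole dataset
on_demand = egress_cost(working_set_gb)  # move only what is needed

print(f"full copy: ${full_copy:,.2f}, on demand: ${on_demand:,.2f}")
```

Even with these made-up numbers, the gap between the two strategies shows why moving only the necessary data matters in a multi-cloud setup.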

The Magic of Data Virtualization
I won’t go through architectural and technical details, since there are videos and documentation on HammerSpace’s website that address them (here and here).  Instead, I want to mention one of the features that I like the most: the ability to query the metadata of your data volumes. These volumes can be anywhere, including your premises, and you can get a result in the form of a new volume that is then kept in sync with the original data. Everything you do on data and metadata is quickly reflected on child volumes. Isn’t it magic?
What I liked the least, even though I understand the technical difficulties in implementing it, is that this process is one-way when a local NAS is involved… meaning that it is only a source of data and can’t be synced back from the cloud. There is a workaround, however, and it might be solved in future releases of the product.
Closing the Circle
HammerSpace exited stealth mode only a few days ago. I’m sure that by digging deeper into the product, flaws and limitations will be found. It is also true that the more advanced features are still only sketched on paper. But I can easily get excited by innovative technologies like this one and I’m confident that these issues will be fixed over time. I’ve been keeping an eye on multi-cloud storage solutions for a while, and now I’ve added HammerSpace to my list.
Multi-cloud data controllers and data virtualization are the focus of an upcoming report I’m writing for GigaOm Research. If you are interested in finding out more about how data storage is evolving in the cloud era, subscribe to GigaOm Research for Future-Forward Advice on Data-Driven Technologies, Operations, and Business Strategies.

Voices in AI – Episode 72: A Conversation with Irving Wladawsky-Berger


About this Episode

Episode 72 of Voices in AI features host Byron Reese and Irving Wladawsky-Berger discuss the complexity of the human brain, the possibility of AGI and its origins, the implications of AI in weapons, and where else AI has and could take us. Irving has a PhD in Physics from the University of Chicago, is a research affiliate with the MIT Sloan School of Management, he is a guest columnist for the Wall Street Journal and CIO Journal, he is an adjunct professor of the Imperial College of London, and he is a fellow for the Center for Global Enterprise.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Irving Wladawsky-Berger. He is a bunch of things. He is a research affiliate with the MIT Sloan School of Management. He is a guest columnist for the Wall Street Journal and CIO Journal. He is an adjunct professor of the Imperial College of London. He is a fellow for the Center for Global Enterprise, and I think a whole lot more things. Welcome to the show, Irving.
Irving Wladawsky-Berger: Byron it’s a pleasure to be here with you.
So, that’s a lot of things you do. What do you spend most of your time doing?
Well, I spend most of my time these days either in MIT-oriented activities or writing my weekly columns, [which] take quite a bit of time. So, those two are a combination, and then, of course, doing activities like this – talking to you about AI and related topics.
So, you have an M.S. and a Ph.D. in Physics from the University of Chicago. Tell me… how does artificial intelligence play into the stuff you do on a regular basis?
Well, first of all, I got my Ph.D. in Physics in Chicago in 1970. I then joined IBM research in Computer Science. I switched fields from Physics to Computer Science because as I was getting my degree in the ‘60s, I spent most of my time computing.
And then you spent 37 years at IBM, right?
Yeah, then I spent 37 years at IBM working full time, and another three and a half years as a consultant. So, I joined IBM research in 1970, and then about four years later my first management job was to organize an AI group. Now, Byron, AI in 1974 was very very very different from AI in 2018. I’m sure you’re familiar with the whole history of AI. If not, I can just briefly tell you about the evolution. I’ve seen it, having been involved with it in one way or another for all these years.
So, back then did you ever have occasion to meet [John] McCarthy or any of the people at the Dartmouth [Summer Research Project]?
Yeah, yeah.
So, tell me about that. Tell me about the early early days in AI, before we jump into today.
I knew people at the MIT AI lab… Marvin Minsky, McCarthy, and there were a number of other people. You know, what’s interesting is at the time the approach to AI was to try to program intelligence, writing it in Lisp, which John McCarthy invented as a special programming language; writing in rules-based languages; writing in Prolog. At the time – remember this was years ago – they all thought that you could get AI done that way and it was just a matter of time before computers got fast enough for this to work. Clearly that approach toward artificial intelligence didn’t work at all. You couldn’t program something like intelligence when we didn’t understand at all how it worked…
Well, to pause right there for just a second… The reason they believed that – and it was a reasonable assumption – the reason they believed it is because they looked at things like Isaac Newton coming up with three laws that covered planetary motion, and Maxwell and different physical systems that only were governed by two or three simple laws and they hoped intelligence was. Do you think there’s any aspect of intelligence that’s really simple and we just haven’t stumbled across it, that you just iterate something over and over again? Any aspect of intelligence that’s like that?
I don’t think so, and in fact my analogy… and I’m glad you brought up Isaac Newton. This goes back to physics, which is what I got my degrees in. This is like comparing classical mechanics, which is deterministic. You know, you can tell precisely, based on classical mechanics, the motion of planets. If you throw a baseball, where is it going to go, etc. And as we know, classical mechanics does not work at the atomic and subatomic level.
We have something called quantum mechanics, and in quantum mechanics, nothing is deterministic. You can only tell what things are going to do based on something called a wave function, which gives you probability. I really believe that AI is like that, that it is so complicated, so emergent, so chaotic; etc., that the way to deal with AI is in a more probabilistic way. That has worked extremely well, and the previous approach where we try to write things down in a sort of deterministic way like classical mechanics, that just didn’t work.
Byron, imagine if I asked you to write down specifically how you learned to ride a bicycle. I bet you won’t be able to do it. I mean, you can write a poem about it. But if I say, “No, no, I want a computer program that tells me precisely…” If I say, “Byron I know you know how to recognize a cat. Tell me how you do it.” I don’t think you’ll be able to tell me, and that’s why that approach didn’t work.
And then, lo and behold, in the ‘90s we discovered that there was a whole different approach to AI based on getting lots and lots of data in very fast computers, analyzing the data, and then something like intelligence starts coming out of all that. I don’t know if it’s intelligence, but it doesn’t matter.
I really think that to a lot of people the real point where that hit home is when in the late ‘90s, IBM’s Deep Blue supercomputer beat Garry Kasparov in a very famous [chess] match. I don’t know, Byron, if you remember that.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.