3 Tech Areas in Which Engineers Are Having a Big Impact

We all know that technology has changed the world dramatically in recent years, and it continues to disrupt industries of all types in every corner of the globe. For engineers, and for the businesses that employ them or use their developments, the meshing of engineering and technology is particularly powerful right now. By pairing humans with computers, some of the most exciting projects around are being released or are under development.

Whether you’re interested in control systems engineering, biomedical engineering, computer engineering, or another specialty, it’s important to stay up to date on the latest developments. Read on for three key tech areas in which engineers are having a big impact.

Robotics

Robotics is an area attracting heavy investment from many different industries, and engineering is no exception. One of the most exciting projects under development is a robotics system called “visual foresight.” While robots usually react to data in real time, responding to things as they happen, researchers at the University of California are working on making it possible for robots to imagine the future consequences of their actions.
This means robots will be able to interact proficiently with situations or objects they haven’t seen before. For instance, a robot might be able to predict what its built-in cameras will see if it performs a certain set of movements in a set sequence.
At the moment, the predictions robots can make through this visual foresight are quite limited, reaching only a few seconds into the future. However, this step forward means that robots can now, and will soon be better able to, learn to perform jobs without prior knowledge or help from humans. This will in turn open up a whole new avenue for how and where robots can be utilized.
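To make the idea concrete, here is a minimal sketch, in PyTorch, of action-conditioned next-frame prediction, the core mechanism behind visual foresight. The architecture, image size and action encoding are illustrative assumptions, not the researchers’ actual model: given the current camera image and a candidate action, the network imagines the next frame, and the robot can compare imagined futures against a goal image before it moves.

```python
# A minimal sketch (assumed architecture; not the University of California model)
# of action-conditioned next-frame prediction: predict what the camera will see
# if a given action is taken, then pick the action whose imagined outcome looks
# closest to a goal image.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, action_dim=4):
        super().__init__()
        # Encode the current 64x64 RGB frame into a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        # Mix the candidate action into the bottleneck features.
        self.action_fc = nn.Linear(action_dim, 64)
        # Decode back to a predicted next frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame, action):
        feat = self.encoder(frame)            # (B, 64, 16, 16)
        act = self.action_fc(action)          # (B, 64)
        feat = feat + act[:, :, None, None]   # broadcast the action over the map
        return self.decoder(feat)             # imagined next frame, (B, 3, 64, 64)

# Usage sketch: score candidate actions by how close the imagined frame comes
# to a goal image, then execute the best one.
model = NextFramePredictor()
frame = torch.rand(1, 3, 64, 64)
goal = torch.rand(1, 3, 64, 64)
candidates = torch.rand(8, 4)                 # 8 candidate action vectors
with torch.no_grad():
    preds = model(frame.repeat(8, 1, 1, 1), candidates)
    best = torch.mean((preds - goal) ** 2, dim=(1, 2, 3)).argmin()
print("best action index:", int(best))
```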

3D Printing

Another topical subject is 3D printing, which is also advancing in leaps and bounds, particularly in medicine. For example, 3D-printed anatomical models are increasingly used to help doctors improve surgical outcomes, because they let surgeons practice operations in advance on specially created replicas of patient organs.
Until recently these models have been made of hard plastic, which feels nothing like living tissue and is tough for surgeons to cut into, but things are changing. A team of researchers led by the University of Minnesota has been developing 3D-printed organ models that are more advanced than the older plastic ones.
The new versions have the same feel and mechanical properties as living tissue. They can also come equipped with soft sensors that provide feedback during practice operations, letting surgeons know when they’re applying the right amount of pressure without damaging fragile tissue. That makes it easier for surgeons to plan operations effectively and to predict how patient organs will react to and heal from them. Eventually, it’s believed, bionic organs may even be printed on demand for transplants.

Another big 3D-printing project in the works is the creation of printed objects that can connect to Wi-Fi without the need for electronics. At the University of Washington, teams are developing 3D-printed plastic items that can connect to, talk to, and collate data from other devices in a building, all without the usual electronic components.
The engineers at the university replaced functions typically performed by electronic components with mechanical motion from pieces that can be 3D printed, including buttons, springs, knobs, switches, and gears. It is hoped that consumers will one day be able to use their own domestic 3D printers to create objects out of readily available plastics and have these devices communicate wirelessly. For example, a bottle of laundry detergent could sense when the soap is getting low and automatically connect to the internet to order a refill.

Biomedical Advances

Biomedicine is another exciting field. Apart from the aforementioned 3D-printed organs, engineers are working on many other developments.
A team of researchers at the University of Texas, in conjunction with others at the University of Reims, is concentrating on complex plasmonic nanovesicles: minute capsules that can be taken as a pill. Once swallowed, they navigate the bloodstream and travel to a set location in the body to deliver a drug in the exact spot where it’s needed. By hitting the capsules with a short laser pulse once they’re in position, the researchers believe the nanoparticles can be made to change shape and release their contents on demand.
This innovative drug-delivery system has enormous potential and could truly transform medicine. This is especially the case in the treatment of cancers and the study of the brain, where only certain parts of an organ need to be targeted.
 

Drama-Free Artificial Intelligence

Depending on who’s listening, the current discussion involving the growing role of Artificial Intelligence in business inspires a range of dramatically divergent emotions. There’s often fear, because of what some believe to be AI’s vaguely sci-fi vibe and dystopian possibilities. Among business people, there is also confusion, on account of the inability of most laypeople to separate AI hype from AI fact. Apprehension also looms large, usually from managers who sense that a great wave of technology disruption is about to hit them, but who feel utterly unprepared for it.  
But from our experience with Fortune 500 companies, we’ve come to believe that the proper response by business leaders to AI should be more benign: appreciation. Whatever anxieties it might produce, the fact is that AI is ready today to bring a trio of new efficiencies to the enterprise. Specifically, scores of companies have learned how AI technologies can transform how they process transactions, how they deal with data and how they interact with customers.
Better still, they have been able to take advantage of this AI triad without turning themselves into an internet giant or hiring huge new teams of hard-to-find, not to mention expensive, data scientists. AI products are available now in nearly turnkey form from a growing list of enterprise vendors. True, you and your IT staff will need to do a certain amount of homework to evaluate vendors and make sure product implementations map onto your precise business needs. But doing so isn’t a heavy lift, and the effort will likely be rewarded by the new efficiencies AI makes possible.
Companies are benefiting from AI right now, in ways that are making a difference on both the top and bottom line.
“Robotic and Cognitive Automation” is the name we at Deloitte give to AI’s ability to automate a huge swath of work that formerly required hands-on attention from human beings. The most popular form of R&CA involves gathering data from disparate sources and bringing them together in a single document. An invoice, for example, usually cites a number of sources, each of which stores relevant information in slightly different formats. An R&CA system has the intelligence necessary to transcend the usual literal-mindedness of computer systems, and process the information it needs despite the fact that it might have different representations in different places.
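As a rough sketch of that consolidation step (the field names, formats and source systems below are hypothetical, not any vendor’s actual R&CA product), the job is to recognize that differently labeled, differently formatted fields in different systems describe the same invoice, and to merge them into one normalized record:

```python
# A rough sketch (hypothetical field names and source systems, not any vendor's
# actual R&CA product) of the consolidation step: the same invoice information
# is stored under different labels and formats, and gets merged into one record.
from datetime import datetime

erp_record = {"inv_no": "INV-1042", "amt": "1,250.00", "date": "2017-03-01"}
procurement_record = {"invoice_number": "INV-1042", "total_usd": "1250.00", "issued": "03/01/2017"}

# Map each canonical field to the aliases it goes by in the source systems.
FIELD_ALIASES = {
    "invoice_id": ("inv_no", "invoice_number"),
    "amount": ("amt", "total_usd"),
    "issue_date": ("date", "issued"),
}

def parse_date(value):
    """Accept any of the date formats the source systems are known to use."""
    for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
        try:
            return datetime.strptime(value, fmt).date()
        except ValueError:
            pass
    raise ValueError(f"unrecognized date: {value!r}")

def consolidate(*records):
    """Merge records from disparate sources into one canonical invoice."""
    merged = {}
    for canonical, aliases in FIELD_ALIASES.items():
        for record in records:
            for alias in aliases:
                if alias in record:
                    merged.setdefault(canonical, record[alias])
    merged["amount"] = float(str(merged["amount"]).replace(",", ""))
    merged["issue_date"] = parse_date(merged["issue_date"])
    return merged

print(consolidate(erp_record, procurement_record))
```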
As AI techniques have become more robust in recent years, so too have the capabilities of R&CA packages. Now, instead of simply pulling spreadsheet-type data from sundry sources, they can process whole passages of text. Not as well as a human being can, for sure, but enough to get a general sense of the topics that are being covered. As a result, there are now R&CA systems that can “read” through emails and flag those that might be relevant to a particular issue. Such systems are now commonly found, for example, at large law practices, which use them to search through huge email libraries to discover which materials might need to be produced in connection with a particular bit of litigation. This is the sort of routine work that previously required paralegals.
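A toy sketch of that email-triage idea follows; the matter-specific terms and the threshold are made up for illustration, and production e-discovery systems rely on trained language models rather than keyword counts, but the workflow is the same: score each message against an issue and surface the ones a reviewer should look at first.

```python
# A toy sketch (hypothetical keywords and threshold; not a real e-discovery
# product) of email triage: score each message against terms tied to a matter
# and flag the ones worth a human reviewer's attention.
ISSUE_TERMS = {"acquisition": 3.0, "term sheet": 2.5, "indemnification": 2.0,
               "confidential": 1.0}

def relevance_score(text):
    text = text.lower()
    return sum(weight for term, weight in ISSUE_TERMS.items() if term in text)

emails = [
    "Lunch on Friday?",
    "Attached is the revised term sheet for the acquisition. Confidential.",
    "Reminder: expense reports due.",
]

flagged = [(relevance_score(e), e) for e in emails if relevance_score(e) >= 2.0]
for score, email in sorted(flagged, reverse=True):
    print(f"{score:.1f}  {email}")
```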
Another cluster of AI applications, known as “Cognitive Insights,” involves making better use of a company’s data. These tools allow companies to manage the flood of information they collect every day, from business reporting tools to social media accounts. More importantly, they give businesses the ability to use that information to generate real-time insights and actions.
Consider just one area in which these new AI capabilities can be useful: digital marketing. Staffers running email campaigns can now improve click-through rates by using their AI-acquired knowledge of each customer’s personality to determine which words or phrases in the subject line might be more likely to get the person to read the email. Small changes can make a big difference; reports of double-digit increases in opened emails are common with AI.
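One simple way such a system could learn which wording works best, sketched below with hypothetical subject lines for a single customer segment, is an epsilon-greedy bandit: mostly send the best performer so far, occasionally try the alternatives, and update open rates as results come in.

```python
# A minimal sketch (hypothetical subject lines; not any vendor's marketing tool)
# of learning which subject line gets opened most: an epsilon-greedy bandit that
# mostly sends the best performer so far and occasionally explores alternatives.
import random

SUBJECT_LINES = ["Your exclusive offer inside",
                 "A quick question for you",
                 "Don't miss this week's deals"]

class SubjectLineBandit:
    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.sends = {arm: 0 for arm in arms}
        self.opens = {arm: 0 for arm in arms}

    def choose(self):
        untried = [arm for arm, n in self.sends.items() if n == 0]
        if untried:
            return random.choice(untried)            # try every line at least once
        if random.random() < self.epsilon:
            return random.choice(list(self.sends))   # explore
        return max(self.sends, key=lambda a: self.opens[a] / self.sends[a])  # exploit

    def record(self, arm, opened):
        self.sends[arm] += 1
        self.opens[arm] += int(opened)

# Usage sketch: one bandit per customer segment, updated as open events arrive.
bandit = SubjectLineBandit(SUBJECT_LINES)
subject = bandit.choose()
bandit.record(subject, opened=True)
```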
Finally, AI is fundamentally changing the way companies work with their customers. This is occurring everywhere, but is most common in interactions with millennials. This cohort grew up texting on their mobile phones and is often more comfortable interacting with an app than with a human being.
As a result, millennials are extremely receptive to a new breed of automated customer service applications that AI is making possible. (These are vastly superior to the rudimentary “chatbots” that some companies used in the early days of the Web.) With advances in the AI field known as Natural Language Processing, computers are now able to deal with the sorts of real-world questions that customers are likely to ask, such as “Why is this charge on my credit card statement?” Deploying computers for these types of routine inquiries allows companies to deliver a uniform, high-quality customer experience while simultaneously improving the value of their brand.
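The sketch below shows that routing idea in miniature, with hypothetical intents and canned answers; a real NLP system uses trained language models rather than word overlap, but the shape of the problem is the same: map a free-form question to a known intent, answer the routine ones, and hand the rest to a person.

```python
# A toy sketch (hypothetical intents and canned responses; real systems use
# trained language models, not word overlap) of routing a routine customer
# question to a consistent answer.
INTENTS = {
    "explain_charge": {"why", "charge", "statement", "transaction"},
    "reset_password": {"reset", "password", "login", "locked"},
    "card_lost": {"lost", "stolen", "card", "replace"},
}

RESPONSES = {
    "explain_charge": "That charge comes from a pending purchase; here are the details...",
    "reset_password": "You can reset your password from the sign-in screen...",
    "card_lost": "I've frozen your card and a replacement is on the way...",
}

def route(question):
    words = set(question.lower().replace("?", "").split())
    # Pick the intent whose keyword set overlaps the question the most.
    intent = max(INTENTS, key=lambda i: len(INTENTS[i] & words))
    if not INTENTS[intent] & words:
        return "Let me connect you with a human agent."
    return RESPONSES[intent]

print(route("Why is this charge on my credit card statement?"))
```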
You’ve probably noticed that while AI is often described as the equivalent of “thinking machines,” all of the tasks described above are relatively discrete and well-defined. That’s because, for all the progress that’s been made in AI, the technology still doesn’t come close to matching human intelligence. AI products perform specific tasks just fine, but don’t expect them (yet) to handle everyday human skills like professional judgment and common sense.
What’s more, AI can’t be used to paper over inefficiencies in a business, whether they be strategic or operational. If the processes you’re using AI for aren’t fundamentally sound to begin with, the new technology won’t be of any help, and may exacerbate problems by hiding them behind added layers of software. You’ll need to use some old-fashioned intelligence to take a good, hard look at your organization before trying to take advantage of the new, artificial variety. It will, though, be well worth the effort.

Jeff Loucks is the executive director of the Deloitte Center for Technology, Media and Telecommunications. In his role, he conducts research and writes on topics that help companies capitalize on technological change. An award-winning thought leader in digital business model transformation, Jeff is especially interested in the strategies organizations use to adapt to accelerating change. Jeff’s academic background complements his technology expertise: he has a Bachelor of Arts in political science from The Ohio State University, and a Master of Arts and PhD in political science from the University of Toronto.

++

Mic Locker is a director with Deloitte Consulting LLP and leader of its Enterprise Model Design practice. With more than 15 years of consulting experience and more than three years of operations experience, she specializes in leading organizations through transformational changes ranging from business model redesign and capability alignment to process reinvention, operational cost reduction, and new business/product launches.

Four Questions For: Ryan Calo

How do you draw the line between prosecuting a robot that does harm and its creator? Who bears the burden of the crime or wrongdoing?
I recently got the chance to respond to a short story by a science fiction writer I admire. The author, Paolo Bacigalupi, imagines a detective investigating the “murder” of a man by his artificial companion. The robot insists it killed its owner intentionally in retaliation for abuse, and demands a lawyer.
Today’s robots are not likely to be held legally responsible for their actions. The interesting question is whether anyone will be. If a driverless car crashes, we can treat the car like a defective product and sue the manufacturer. But where a robot causes a truly unexpected harm, the law will struggle. Criminal law looks for mens rea, meaning intent. And tort law looks for foreseeability.
If a robot behaves in a way no one intended or foresaw, we might have a victim with no perpetrator. This could happen more and more as robots gain greater sophistication and autonomy.
Do tricky problems in cyber law and robotics law keep you awake at night?
Yes: intermediary liability. Personal computers and smart phones are useful precisely because developers other than the manufacturer can write apps for them. Neither Apple nor Google developed Pokemon Go. But who should be responsible if an app steals your data or a person on Facebook defames you? Courts and lawmakers decided early on that the intermediary—the Apple or Facebook—would not be liable for what people did with the platform.
The same may not be true for robots. Personal robotics, like personal computers, is likely to rise or fall on the ingenuity of third party developers. But when bones instead of bits are on the line—when the software you download can touch you—courts are likely to strike a different balance. Assuming, as I do, that the future of robotics involves robot app stores, I am quite concerned that the people that make robots will not open them up to innovation due to the uncertainty of whether they will be held responsible if someone gets hurt.
Would prosecuting someone who harms a robot be different from prosecuting someone who harms a non-thinking or non-intelligent piece of machinery?
It could be. The link between animal abuse and child abuse, for instance, is so strong that many jurisdictions require authorities responding to an animal abuse allegation to alert child protective services if kids are in the house. Robots elicit very strong social reactions. There are reports of soldiers risking their lives on the battlefield to rescue a robot. In Japan, people have funerals for robotic dogs. We might wonder about a person who abuses a machine that feels like a person or a pet. And, eventually, we might decide to enhance penalties for destroying or defacing a robot beyond what we usually levy for vandalism. Kate Darling has an interesting paper on this.
Should citizens be concerned about robotic devices in their homes compromising their privacy, or about hackers attacking their medical devices? How legitimate or illegitimate are people’s fears about the rise of technology?
People should be concerned about robots and artificial intelligence but not necessarily for the reasons they read about in the press. Kate Crawford of Microsoft Research and I have been thinking through how society’s emphasis on the possibility of the Singularity or a Terminator distorts the debate surrounding the social impact of AI. Some think that superintelligent AI could be humankind’s “last invention.” Many serious computer scientists working in the field scoff at this, pointing out that AI and robotics are technologies still in their infancy. But despite AI’s limits, these same experts advocate introducing AI into some of our most sensitive social contexts such as criminal justice, finance, and healthcare. As my colleague Pedro Domingos puts it: The problem isn’t that AI is too smart and will take over the world. It’s that it is too stupid and already has.
Ryan Calo
Ryan Calo is a law professor at the University of Washington and faculty co-director of the Tech Policy Lab, a unique, interdisciplinary research unit that spans the School of Law, Information School, and Department of Computer Science and Engineering. Calo holds courtesy appointments at the University of Washington Information School and the Oregon State University School of Mechanical, Industrial, and Manufacturing Engineering. He has testified before the U.S. Senate and German Parliament and been called one of the most important people in robotics by Business Insider. This summer, he helped the White House organize a series of workshops on artificial intelligence.
@rcalo on Twitter
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2402972
http://www.slate.com/articles/technology/future_tense/2016/04/a_robotics_law_expert_on_paolo_bacigalupi_s_mika_model.html

Four Questions For: Daniela Rus

You have said before that you believe robots will be as commonplace as smartphones. How do you envision robots to be used in people’s everyday lives?
Recent years have seen major advances in fields that are vital for producing useful robots. Computer vision researchers are developing algorithms that allow machines to “see” increasingly more like humans. Experts in manipulation and control are creating increasingly more agile robots that can do increasingly more dexterous tasks. Research in natural language processing is creating intuitive ways for machines and humans to interact.
Taken all together, this work has gotten us closer than ever to a world where robots will be able to help us with everything from space exploration and search-and-rescue operations to manufacturing and folding our laundry!
Robots will help with physical tasks in a way that is analogous to how smartphones help with computational and data tasks. Imagine a world where anyone can design and build their own robot, customized for their needs. This could create a whole new industry of “24-hour manufacturing”: people could go to a 24-hour robot-manufacturing store to customize a robot to their own needs.
For example, say you want to retrieve a ring from a vent, or tidy the toys on your floor. From this task specification, the store will be able to create a robot quickly and at low cost. We do not have these capabilities today, but we are developing technologies that will someday enable us to create our own robots like that.
How do you create limits that safeguard humans against being harmed by robots?
We need technological solutions and policies. We should always be cognizant of new technologies’ potential for both good and bad. When airbags came out, many people were understandably concerned that they occasionally malfunctioned, causing injuries or worse. But we now accept that the many lives that airbags save far outweigh the harm they cause.
I believe that in the coming years we will go through a similar process with robots. That said, I’m glad that there has been a healthy debate to make sure that we implement clear guidelines on robotic technologies and how they are used.
My team is acutely aware of the fact that traditional factory robots are caged to make sure that we are safe around them. Our recent research has explored the potential of soft-bodied robots, which we think could be a safer alternative. (For certain tasks, they might even be more effective, since their agility lets them change direction and squeeze into tight spaces.)
How can American workers prepare themselves for careers in technology? How can people already in career paths who have already been working for 10, 20, 30 years keep from getting left behind?
Rapid progress in computing over the past 50 years has made it so that we simply cannot live without computers today. The world is undergoing a big change in the nature of work, and we need to adapt to this change and be tech-literate. Although technology is developing rapidly and jobs are changing too, the job changes are more gradual. This means we should be concerned about the young generation and make sure that they learn the skills that will be required in the future.
Fortunately, with online education I believe that it’s never been easier to pick up new skills even when you’re a decade or two past your college years. Companies like edX offer many courses to keep you up to speed on emerging technologies and skills. In the last couple years we’ve created several courses aimed directly at working professionals who want to learn the latest skills in big data, cybersecurity and the Internet of Things.
As more jobs require using computers in complex ways, we have to address the knowledge gap in teaching coding. I think that this means not just “learning to code,” but what you might call “coding to learn” – learning these skills not merely as an end in itself, but to look at the world in a different way.
What advancements in robotics do you expect to see in your lifetime?
Robots will be able to help us with physical tasks big and small. I think we’re only a few years away from a world where you will be able to walk into a store and order your own robot for specific tasks around the home or office.
For example, I believe that artificial intelligence will transform transportation profoundly. In the next decade we will see more customized and safer transportation with autonomous and near-autonomous technologies doing a lot of the heavy lifting with getting people and packages around, whether by train, plane or car.
One challenge to making robots commonplace is that they take a long time to design and build. Today’s fabrication processes are slow and costly, with even one small change resulting in days or even weeks of lost time to re-evaluate the designs.
My team is among many around the globe working to develop systems that let you design robots more quickly. In our case, we have developed an interface that lets you design a robot in just a few minutes and have it 3D printed in a matter of hours.
http://danielarus.csail.mit.edu/
Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.  Rus’s research interests are in robotics, mobile computing, and data science. Rus is a Class of 2002 MacArthur Fellow, a fellow of ACM, AAAI and IEEE, and a member of the National Academy of Engineering. She earned her PhD in Computer Science from Cornell University. Prior to joining MIT, Rus was a professor in the Computer Science Department at Dartmouth College.
 

Four Questions For: Sebastian Thrun

You’re the founder of Udacity, which aims to democratize education. What changes have you seen in education since founding Udacity, and how do you expect education to transform in your lifetime?
In the future, education will shift from once-in-a-lifetime to lifelong. We are already seeing an increasing number of people demanding new education and new credentials as they walk through life. In the tech space, Udacity has become the go-to place for millions of people, not least because of our very strong ties to the tech industry, which eagerly hires our graduates. I also believe the sky-high tuition fees of existing universities will crumble.
Considering the potential job loss that we will experience as AI and robotics industries progress, how should education change? What jobs should our children prepare for?
Technology is moving faster and faster. People live longer and longer. So education has to become lifelong. For our kids, more important than any math or language skill will be the skill of learning to learn. The next generation has to make mental growth and lifelong learning a core component of their lives.
How do you believe that AI can positively enrich human life? What is there to fear regarding AI?
AI will make us superhuman. Just as cars have made us superhuman (we can now “run” at 100mph), and phones have made us superhuman (we can now talk with people thousands of miles away), AI will give us superhuman memory, problem solving abilities, and an ability to get things done. 300 years ago, most of us worked in farming, doing the same physical task over and over again. Today, most of us work in offices, doing the same mental task over and over again. AI will do to boring repetitive mental work what the steam engine did to repetitive physical work in the fields.
How do you feel that AI can impact higher education? Will you see this in your lifetime?
At Udacity, we are already using AI and machine learning to maximize the chances of positive learning outcomes. We use AI to analyze individual students, helping our staff to time effective interventions. We use AI to analyze our content, finding any and every opportunity to improve the student learning experience. And of course, Udacity heavily teaches AI. Our students can get a nanodegree certificate in machine learning, or self-driving cars.
Sebastian Thrun is the CEO of Udacity, a former Google Fellow and VP, and a Research Professor at Stanford University. He has published over 370 scientific papers and 11 books, and he is a member of the National Academy of Engineering in the US. Sebastian works on revolutionizing all of transportation, education, homes, and medical care. Fast Company named Thrun the fifth most creative person in business, and Foreign Policy named him Global Thinker #4. At Stanford, Sebastian led the Thrun Lab in creating Google Street View. Then, at Google, Sebastian founded Google X, which he leveraged to launch projects like self-driving cars, Google Glass, indoor navigation, Google Brain, Project Wing and Project Loon. At Udacity, his vision is to democratize higher education. Udacity stands for “we are audacious, for you, the student.” His team created the notion of “nanodegrees,” which empower people of all backgrounds and ages to find employment in the tech industry.

Gigaom Talks with Rodolphe Gelin about Robotics

A graduate of the School of Civil Engineering (l’Ecole des Ponts et Chaussées) with a DEA in Artificial Intelligence, Rodolphe Gelin, EVP Chief Scientific Officer, SoftBank Robotics, has 20 years of experience in research with teams at the Commissariat à l’Energie Atomique (CEA), most notably in robotics used to assist people. Rodolphe joined SoftBank Robotics in 2008 as Director of Research and Head of Collaborative Projects. He is also the head of the ROMEO project to create a large robot designed to assist the elderly. Rodolphe Gelin now leads the Innovation team, which aims to develop new technologies for current robots and continue the exploration of humanoid robotics.
Rodolphe Gelin will be speaking at the Gigaom Change Leaders Summit in Austin, September 21-23rd. In anticipation of that, I caught up with him to ask a few questions.
Byron Reese: People have imagined robots for literally thousands of years. What do you think is the source of this fascination?
Rodolphe Gelin: The idea of robots has fed our curiosity for more than half a century with the potential of having another form factor to interact with on our own natural terms. In reality, robots offer us much more and, through recent technological innovations, are now helping us reach the next frontier in artificial intelligence research and engineering. As mankind has evolved, our inherent nature has been to create, build and then continue to evolve what we’ve created; advancing in technology is no different. As high tech has become more prolific, we are constantly on a quest to evolve our technical knowledge. Robotics represents the next step, or extension, of that continued innovation.
 
It is obviously extremely difficult to make a robot with the same range of motion and movement as a human. What are some of the things that make it tricky?
Humans have evolved very specific types of muscles which provide us with strength, force and speed that currently no available motor can match. The complexity of the human skeleton gives us extraordinary mobility and support, which is also very difficult to duplicate mechanically. In addition to creating a general form factor for humanoid robots, there are other details to consider, like fluidity and other natural movements that require intricate programming, as well as the various sensors and processors that help a robot identify its surroundings.
 
Do you foresee robots that are indistinguishable from humans? Would people want that?
At SoftBank Robotics, we strongly believe that robots should look like robots. Our robots, Pepper, NAO and Romeo, were created to resemble a human-like figure, but they do not look like us. There are indeed some robotics scientists today who have created robots that look like humans, with features like eyes and “skin” similar to a wax figure’s, but that is not where we are headed with our development of robots. All robotics research is ongoing, and as each form factor becomes more advanced, some robots could look very much like their human counterparts. However, SoftBank Robotics is focused on creating approachable robots that make people feel comfortable and happy.
Do you think that the computing power of robots will eventually be such that they attain consciousness?
I don’t think that a machine that plays chess will attain consciousness even with a lot of computing power. If consciousness is someday available in a computer, it will be because a human being programmed it, in one way or another. A robot is a machine that does what it has been programmed to do. If the programmer does not program a replicated state of ‘consciousness’, there is no way for the program to get one. Random ‘conscious-like’ processing could appear in the form of a computer glitch, but the software designer should detect and correct it if he is not happy with the behavior. And if a developer wants to give a consciousness to his robot, he probably can. But what would be the purpose of it? To give a moral sense to the robot? Do we really want to have a machine judging the morality of what we are doing? I don’t think consciousness is a question of computing power; it is just a question of design.
Thank you for taking the time to share your thoughts on this subject. I look forward to further discussion in September.
Rodolphe Gelin will be speaking on the subject of robotics at Gigaom Change Leaders Summit in Austin, September 21-23rd.

Will the robots take all the jobs?

This article is part of a continuing series leading up to Gigaom Change, which will be held in September in Austin, Texas.
Humans have always had a love/hate relationship with labor saving devices. Generally speaking, the owner of the device loves it and the person put out of work by it hates it. This tension periodically takes the form of violent rejection of industrial technology in all of its forms.
The cotton gin “did the work of twenty men” which meant that after it was installed, one fella loved it, but the nineteen newly-unemployed workers probably shook their fists at the infernal gin, wishing all manner of evil to befall that Eli Whitney troublemaker.
While this “technological unemployment” has been cited as the cause of our economic woes for two centuries, the issue has taken on a new sense of urgency with the emergence of a general fear that the wave of technical innovation we are currently in will capsize the economy and produce a new category of workers: the permanently unemployed.
Is this another case of the boy who cried “no jobs,” or are we witnessing a true transformation of our economic world?
The question is fundamentally unknowable because it hinges on three independent factors, each of which is also unknowable.
The three factors are:

    1) How many jobs will the robots/AI really take?
    2) How quickly will that happen?
    3) What new jobs will be created along the way?

Let’s dive in:
The tipping point of widespread permanent unemployment is thought by many to be the driverless car taking all the jobs away from the truck drivers:
Self-Driving Trucks Are Going to Hit Us Like a Human-Driven Truck
One Oxford study claims that 47% of US jobs could vanish in 20 years, while consulting giant McKinsey & Company says 45% of all work activities could be automated right now.
But at the same time, there is a chorus of voices urging calm and pointing out that in spite of radical transformations of virtually every industry, the US has maintained near-full employment. How can this be?
Technology has created more jobs than it has destroyed, says 140 years of data
Two interesting questions that need to be addressed when approaching these issues are:

The widespread fear of substantial, permanent joblessness has caused the topic of a universal basic income to move to the mainstream. How would this work?
We talked to five experts about what it would take to actually institute Universal Basic Income
Finally, it may simply be that in a post-scarcity world, “working for a living” just doesn’t have the moral imperative that it used to. We might consider what Buckminster Fuller had to say on the topic:
“We should do away with the absolutely specious notion that everybody has to earn a living. It is a fact today that one in ten thousand of us can make a technological breakthrough capable of supporting all the rest. The youth of today are absolutely right in recognizing this nonsense of earning a living. We keep inventing jobs because of this false idea that everybody has to be employed at some kind of drudgery because, according to Malthusian Darwinian theory he must justify his right to exist. So we have inspectors of inspectors and people making instruments for inspectors to inspect inspectors. The true business of people should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living.”
Robotics, and its impact on business, will be one of seven topic areas covered at the Gigaom Change Leader’s Summit in September in Austin. Join us.

The Seven Wonders of the Business Tech World

Just over 2000 years ago, Philo of Byzantium sat down and made a list of the seven wonders of the world at that time. Like any such subjective list, it was met with criticism in its own time. The historian Herodotus couldn’t believe the Egyptian Labyrinth was left off and Callimachus argued forcefully for the Ishtar Gate to be included.
At Gigaom Change in September (early adopter pricing still available), we will explore the seven technologies that I think will most affect business in the near future. I would like to list the seven technologies I chose and why I chose them. Would you have picked something different?
Here is my list:
Robots – This one is pretty easy. Even if you make your trade in 1s and 0s and never touch an atom, robots will still impact some aspect of your business, even if it is upstream. Additionally, the issue of robots has launched a societal debate about unemployment, minimum wage, basic income, and the role of “working for a living” in the modern world. We have dreamed of robots for eons, feared them for decades, and now we finally get to see what their real effect on humanity will be.
AI – This is also, forgive the pun, a no-brainer. AI is a tricky one, though. Some of the smartest people on the planet (Hawking, Gates, Musk) say we should fear it, while others, such as the Chief Scientist of Baidu, say worrying about AI is like worrying about overpopulation on Mars. Further, estimates of when we might see an AGI (artificial general intelligence, an AI that can do a wide range of tasks like a human) vary from 5 years to 500 years. Our brains are, arguably, what make us human, and the idea that an artificial brain might be made gets our attention. What effect will this have on the workplace? We will find out.
AR/VR – Although we think of AR/VR as (at first) a consumer technology, the work applications are equally significant. You only have to put on a VR headset for about three minutes to see that some people, maybe a good number, will put this device on and never take it off. But on the work front, it is still an incredibly powerful tool, able to overlay information from the digital world onto the world of atoms. Our brains aren’t quite wired up to imagine this in its full flowering, but we will watch it unfold in the next decade.
Human/Machine Interface – Also bridging the gap between the real world and the virtual one is the whole HMI front. As machines become ever more ubiquitous, our need to seamlessly interface with them grows. HMI is a wide spectrum of technologies, from good UIs to eye-tracking hardware to biological implants, and it will grow to the point where the line between where the human ends and the machine begins gets really blurry.
3D Printing – We call this part of Gigaom Change “3D Printing” but we mean it to include all the new ways we make stuff today. But there isn’t a single term that encapsulates that, so 3D Printing will have to suffice. While most of our first-hand experience with 3D printing is single-color plastic demo pieces, there is an entire industry working on 3D printing new hearts and livers, as well as more mundane items like clothing and food (“Earl Grey, hot”). From a business standpoint, the idea that quantity one has the same unit price as quantity one-thousand is powerful and is something we will see play out sooner than later.
Nanotechnology – I get the most pushback on nano because it seems so far out there. But it really isn’t. By one estimate, there are two thousand nanotech products on the market today. Nano, building things with dimensions of between 1 and 100 nanometers, is already a multi-billion dollar industry. On the consumer side, we will see nano robots that swim around in your blood cleaning up what ails you. But on the business side, we will see a rethinking of all of the material sciences. The very substances we deal with will change, and we may be said to live not in the iron or stone age but in the nano age, where we make materials that were literally impossible to create just a few years ago.
Cybersecurity – This may seem to be the one item that is least like all of the others, for it isn’t a specific technology per se. I included it, though, because the more our businesses depend on the technologies we use, the more susceptible they are to attacks through those technologies. How do we build in safeguards in a world where most of us don’t really understand the technologies themselves, let alone the subtle ways they can be exploited?
Those are my seven technologies that will most affect business. I hope you can come to Austin Sept 21-23 to explore them all with us at the Gigaom Change Leader’s Summit.
Byron Reese
Publisher
Gigaom

Why you can’t program intelligent robots, but you can train them

If it feels like we’re in the midst of a robot renaissance right now, perhaps it’s because we are. There is a new crop of robots under development that we’ll soon be able to buy and install in our factories or interact with in our homes. And while they might look like robots past on the outside, their brains are actually much different.

Today’s robots aren’t rigid automatons built by a manufacturer solely to perform a single task faster and cheaper than humans and, ideally, without much input from them. Rather, today’s robots can be remarkably adaptable machines that not only learn from their experiences, but can even be designed to work hand in hand with human colleagues. Commercially available (or soon-to-be) technologies such as Jibo, Baxter and Amazon Echo are three well-known examples of what’s now possible, but they’re also just the beginning.

Different technological advances have spurred the development of smarter robots depending on where you look, although they all boil down to training. “It’s not that difficult to build the body of the robot,” said Eugene Izhikevich, founder and CEO of robotics startup Brain Corporation, “but the reason we don’t have that many robots in our homes taking care of us is it’s very difficult to program the robots.”

Essentially, we want robots that can perform more than one function, or perform one function very well. It’s difficult to program a robot to do multiple things, or at least the things that users might want, and it’s especially difficult to program it to do those things in different settings. My house is different from your house; my factory is different from your factory.

A collection of RoboBrain concepts.


“The ability to handle variations is what enables these robots to go out into the world and actually be useful,” said Ashutosh Saxena, a Stanford University visiting professor and head of the RoboBrain project. (Saxena will be presenting on this topic at Gigaom’s Structure Data conference March 18 and 19 in New York, along with Julie Shah of MIT’s Interactive Robotics Group. Our Structure Intelligence conference, which focuses on the cutting edge in artificial intelligence, takes place in September in San Francisco.)

That’s where training comes into play. In some cases, particularly projects residing within universities and research centers, the internet has arguably been a driving force behind advances in creating robots that learn. That’s the case with RoboBrain, a collaboration among Stanford, Cornell and a few other universities that crawls the web with the goal of building a web-accessible knowledge graph for robots. RoboBrain’s researchers aren’t building robots, but rather a database of sorts (technically, more of a representation of concepts — what an egg looks like, how to make coffee or how to speak to humans, for example) that contains information robots might need in order to function within a home, factory or elsewhere.

RoboBrain encompasses a handful of different projects addressing different contexts and different types of knowledge, and the web provides an endless store of pictures, YouTube videos and other content that can teach RoboBrain what’s what and what’s possible. The “brain” is trained with examples of things it should recognize and tasks it should understand, as well as with reinforcement in the form of thumbs up and down when it posits a fact it has learned.

For example, one of its flagship projects, which Saxena started at Cornell, is called Tell Me Dave. In that project, researchers and crowdsourced helpers across the web train a robot to perform certain tasks by walking it through the necessary steps for tasks such as cooking ramen noodles.  In order for it to complete a task, the robot needs to know quite a bit: what each object it sees in the kitchen is, what functions it performs, how it operates and at which step it’s used in any given process. In the real world, it would need to be able to surface this knowledge upon, presumably, a user request spoken in natural language — “Make me ramen noodles.”

The Tell Me Dave workflow.

Multiply that by any number of tasks someone might actually want a robot to perform, and it’s easy to see why RoboBrain exists. Tell Me Dave can only learn so much, but because it’s accessing that collective knowledge base or “brain,” it should theoretically know things it hasn’t specifically trained on. Maybe how to paint a wall, for example, or that it should give human beings in the same room at least 18 inches of clearance.
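A toy sketch of the underlying representation, with hand-written concepts and steps standing in for what RoboBrain actually learns at web scale, helps show why a shared knowledge base matters: facts about objects, their affordances and task steps live in one queryable structure rather than inside any single robot.

```python
# A toy sketch (hypothetical concepts and edges; the real RoboBrain is a large
# learned knowledge graph, not a hand-written dict) of the underlying idea:
# store what objects are, what they afford, and which steps make up a task,
# so any robot can query the shared "brain" instead of learning from scratch.
KNOWLEDGE = {
    "concepts": {
        "kettle": {"is_a": "container", "affords": ["fill", "heat", "pour"]},
        "ramen":  {"is_a": "food",      "affords": ["cook", "serve"]},
    },
    "tasks": {
        "make ramen": [
            ("fill", "kettle", "water"),
            ("heat", "kettle", None),
            ("pour", "kettle", "bowl"),
            ("cook", "ramen", "bowl"),
        ],
    },
}

def plan(task_name):
    """Return the ordered steps a robot would need to execute a known task."""
    return KNOWLEDGE["tasks"].get(task_name, [])

for verb, obj, target in plan("make ramen"):
    print(verb, obj, target or "")
```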

There are now plenty of other examples of robots learning by example, often in lab environments or, in the case of some recent DARPA research using the aforementioned Baxter robot, by watching YouTube videos about cooking.

Advances in deep learning — the artificial intelligence technique du jour for machine-perception tasks such as computer vision, speech recognition and language understanding — also stand to expedite the training of robots. Deep learning algorithms trained on publicly available images, video and other media content can help robots recognize the objects they’re seeing or the words they’re hearing; Saxena said RoboBrain uses deep learning to train robots on proper techniques for moving and grasping objects.

The Brain Corporation platform.

However, there’s a different school of thought that says robots needn’t necessarily be as smart as RoboBrain wants to make them, so long as they can at least be trained to know right from wrong. That’s what Izhikevich and his aforementioned startup, Brain Corporation, are out to prove. The company has built a specialized hardware and software platform, based on the idea of spiking neurons, that Izhikevich says can go inside any robot, and “you can train your robot on different behaviors like you can train an animal.”

That is to say, for example, that a vacuum robot powered by the company’s operating system (called BrainOS) won’t be able to recognize that a cat is a cat, but it will be able to learn from its training that that object — whatever it is — is something to avoid while vacuuming. Conceivably, as long as they’re trained well enough on what’s normal in a given situation or what’s off limits, BrainOS-powered robots could be trained to follow certain objects or detect new objects or do lots of other things.
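Here is a deliberately simple sketch of that train-by-example idea, using made-up sensor features and a nearest-centroid rule rather than Brain Corporation’s spiking-neuron platform: the robot never learns what the object is, only whether new readings look more like the “avoid” demonstrations or the “proceed” ones.

```python
# A toy sketch (hypothetical features and labels; BrainOS itself is built on
# spiking-neuron hardware and software, which this does not model) of the
# train-by-example idea: the robot never learns what a cat is, only that
# readings like these mean "avoid" and readings like those mean "keep going".
import numpy as np

# Feature vectors from a proximity/vision sensor during supervised demos,
# e.g. [object height, object warmth, distance], labeled by the trainer.
demos = np.array([[0.3, 0.9, 0.4],   # the cat -> avoid
                  [0.2, 0.8, 0.6],   # the cat again -> avoid
                  [0.1, 0.1, 0.3],   # a shoe -> okay to nudge past
                  [0.0, 0.0, 0.9]])  # open floor -> okay
labels = np.array([1, 1, 0, 0])      # 1 = avoid, 0 = okay

# "Training": remember the average reading for each behavior.
centroids = {c: demos[labels == c].mean(axis=0) for c in (0, 1)}

def decide(reading):
    """Steer away if the new reading looks more like the 'avoid' examples."""
    dists = {c: np.linalg.norm(reading - centroid) for c, centroid in centroids.items()}
    return "avoid" if dists[1] < dists[0] else "proceed"

print(decide(np.array([0.25, 0.85, 0.5])))   # resembles the cat -> "avoid"
```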

If there’s one big challenge to the notion of training robots versus just programming them, it’s that consumers or companies that use the robots will probably have to do a little work themselves. Izhikevich noted that the easiest model might be for BrainOS robots to be trained in the lab, and then have that knowledge turned into code that’s preinstalled in commercial versions. But if users want to personalize robots for their specific environments or uses, they’re probably going to have to train it.

Part of the training process with Canary. The next step is telling the camera what it’s seeing.

As the internet of things and smart devices in general catch on, consumers are already getting used to the idea — sometimes begrudgingly. Even when it’s something as simple as pressing a few buttons in an app, like training a Nest thermostat or a Canary security camera, training our devices can get tiresome. Even those of us who understand how the algorithms work can get annoyed.

“For most applications, I don’t think consumers want to do anything,” Izhikevich said. “You want to press the ‘on’ button and the robot does everything autonomously.”

But maybe three years from now, by which time Izhikevich predicts robots powered by Brain Corporation’s platform will be commercially available, consumers will have accepted one inherent tradeoff in this new era of artificial intelligence — that smart machines are, to use Izhikevich’s comparison, kind of like animals. Specifically, dogs: They can all bark and lick, but turning them into seeing eye dogs or K-9 cops, much less Lassie, is going to take a little work.

How NASA uses quantum computing for space travel and robotics

Quantum computing is still in its infancy, even though the idea of a quantum computer was developed some thirty years ago. But a whole load of pioneering organizations (like Google) are exploring how this potentially revolutionary technology could help them solve complex problems that modern-day computers just can’t crack at any useful speed.

One such organization is NASA, whose use of D-Wave Systems’ quantum computing machines is helping it research better and safer methods of space travel, air traffic control and missions involving sending robots to far-off places, explained Davide Venturelli, a science operations manager at NASA Ames Research Center, Universities Space Research Association. I’ll be speaking with Venturelli on stage at Structure Data 2015, March 18-19 in New York City, and we’ll be sure to cover how NASA envisions the future of quantum computing.

The basic idea of quantum computing is that quantum bits, or qubits — which can exist in a superposition of states and so represent both a 0 and a 1 simultaneously — can be used to greatly boost computing power compared to even today’s most powerful supercomputers. This contrasts with the modern-day binary computing model, in which the many transistors contained in silicon chips can be either switched on or off and can thus only exist in two states, expressed as a 0 or 1.
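For intuition only, the following classical simulation (which is emphatically not how a quantum device works internally) shows what “both 0 and 1 simultaneously” means in practice: a two-qubit register is described by amplitudes over all four basis states at once, and measurement probabilities come from those amplitudes.

```python
# A minimal sketch (a classical simulation for intuition only; a real quantum
# device does not work by storing these numbers) of why qubits differ from bits:
# two qubits are described by amplitudes over all four basis states at once.
import numpy as np

# An equal superposition over the 4 basis states of 2 qubits.
state = np.ones(4, dtype=complex) / 2.0    # amplitudes for |00>, |01>, |10>, |11>
probabilities = np.abs(state) ** 2         # Born rule: |amplitude|^2
for basis, p in zip(["00", "01", "10", "11"], probabilities):
    print(f"|{basis}>: {p:.2f}")           # each outcome seen 25% of the time
```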

With the development of D-Wave Systems machines that have quantum computing capabilities (although researchers argue they are not true quantum computers along the lines of the ones dreamed up on pen and paper in the early 1980s), scientists and engineers can now attempt to solve much more complex tasks without having to perform the type of experiments used to generate quantum phenomena, explained Venturelli. However, these machines are just the tip of the quantum iceberg, and Venturelli still pays attention to ground-breaking research that may lead to better quantum devices.

NASA hopes to use the machines to solve optimization problems, which in the most basic terms means finding the best solution out of many possible solutions. One example NASA has focused on deals with air-traffic management, in which scientists try to “optimize the routes” of planes in order to “make sure the landing and taking off of airplanes in terminals are as efficient as possible,” said Venturelli. If the scientists are able to route air traffic in the best possible way, there’s a good chance they can reduce the dangers of congested skies.
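Problems like that are typically written as a QUBO (quadratic unconstrained binary optimization), the form annealing machines such as D-Wave’s accept. The tiny brute-force sketch below uses a made-up four-variable matrix and plain NumPy, not D-Wave’s hardware or SDK; the appeal of quantum annealing is tackling the same form when thousands of variables make exhaustive search impossible.

```python
# A toy sketch (tiny brute-force solver, not D-Wave hardware or its SDK) of a
# QUBO: each binary variable might mean "flight i uses landing slot j", with
# diagonal terms as per-choice costs and off-diagonal terms as conflict penalties.
import itertools
import numpy as np

Q = np.array([[-1.0,  2.0,  0.0,  0.0],
              [ 2.0, -1.0,  0.0,  0.0],
              [ 0.0,  0.0, -1.5,  2.0],
              [ 0.0,  0.0,  2.0, -0.5]])

def qubo_energy(x, Q):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Exhaustive search is fine for 4 variables; the hard part starts at thousands.
best = min(itertools.product([0, 1], repeat=len(Q)), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))
```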

Davide Venturelli

NASA also wants to use quantum computing to help with automated planning and scheduling, a subset of artificial intelligence that NASA uses to plan out robotic missions to other planets. NASA typically plans out these types of endeavors ten years in advance, said Venturelli.

The goal is to plan out the mission of the robots far in advance because realtime communication with the robots just isn’t feasible given how far away other planets are from the Earth. Using quantum optimization, NASA scientists will have new tools to basically forecast what may occur during the mission and what would be the best possible plan of attack for the robots to do their work.

“We have some missions where we imagine sending multiple robots to planets and these robots will need to coordinate and will need to do operations like landing and such without realtime communication,” said Venturelli.

Scientists need to “maximize the lifetime of the batteries” used by the robots as they perform tasks on the planets, which may include drilling or using infrared thermometers to record temperatures, so careful planning of how the robots do their tasks is needed to ensure that no time is wasted. This involves a lot of variables that normal computers just aren’t equipped to process, and it could be a fit for quantum computing.
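At its smallest, that planning problem looks like the sketch below: choose which tasks a rover attempts so the total energy cost fits the battery budget while the science value is as high as possible. The tasks, numbers and exhaustive search are illustrative only; NASA’s planners juggle far more variables, which is exactly where quantum optimization is hoped to help.

```python
# A toy sketch (hypothetical tasks and numbers; NASA's planners are far more
# elaborate) of the scheduling trade-off: pick the science tasks whose total
# energy cost stays within what the battery can supply before recharging.
from itertools import combinations

# (task name, energy cost in watt-hours, science value)
TASKS = [("drill core sample", 40, 9),
         ("infrared temperature scan", 10, 4),
         ("drive to dark region", 30, 6),
         ("relay data burst", 15, 5)]

BATTERY_WH = 60

def best_plan(tasks, budget):
    """Exhaustively pick the subset of tasks with maximum value within budget."""
    best_value, best_subset = 0, ()
    for r in range(len(tasks) + 1):
        for subset in combinations(tasks, r):
            cost = sum(t[1] for t in subset)
            value = sum(t[2] for t in subset)
            if cost <= budget and value > best_value:
                best_value, best_subset = value, subset
    return best_subset

for name, cost, value in best_plan(TASKS, BATTERY_WH):
    print(f"{name}: {cost} Wh")
```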

“[The robot] has to figure out what is the best schedule and figure out if he can recharge and when to go in a region where it is dark and a region where there is water,” said Venturelli. “We need to preplan the mission.”