The Infinite Retina

By: Irena Cronin, Robert Scoble

Overview of this book

What is Spatial Computing and why is everyone from Tesla, Apple, and Facebook investing heavily in it? In The Infinite Retina, authors Irena Cronin and Robert Scoble attempt to answer that question by helping you understand where Spatial Computing (an augmented reality in which humans and machines can interact in a physical space) came from, where it's going, and why it's so fundamentally different from the computers or mobile phones that came before. They present seven visions of the future and the industry verticals in which Spatial Computing has the most influence: Transportation; Technology, Media, and Telecommunications; Manufacturing; Retail; Healthcare; Finance; and Education. The book also shares insights about the past, present, and future from leading experts and other industry veterans and innovators, including Sebastian Thrun, Ken Bretschneider, and Hugo Swart. They dive into what they think will happen in Spatial Computing in the near and medium term, and also explore what it could mean for humanity in the long term. The Infinite Retina then leaves it up to you to decide whether Spatial Computing is truly where the future of technology is heading or whether it's just an exciting, but passing, phase.

Spatial Computing – The New Paradigm

Spatial Computing comprises all software and hardware technologies that enable humans, virtual beings, or robots to move through real or virtual worlds, and it includes Artificial Intelligence, Computer Vision, Augmented Reality (AR), Virtual Reality (VR), Sensor Technology, and Automated Vehicles.

Seven industry verticals will see transformational change due to Spatial Computing: Transportation; Technology, Media, and Telecommunications (TMT); Manufacturing; Retail; Healthcare; Finance; and Education.

These changes are what is driving strategy at many tech companies and the spending of billions of dollars in R&D investment. Already, products such as Microsoft's HoloLens AR headset have seen adoption everywhere from surgery rooms to military battlefields. Devices like this showcase the new computing paradigm, albeit in a package that's currently a little too bulky and expensive for all but the hardiest early adopters. However, these early devices are what got us most excited about a future that will be here soon.

Our first experiences with HoloLens, and other devices like it, showed us such a fantastic world that we can predict that when this new world arrives in full, it will be far more important to human beings than the iPhone was.

We were shown giant virtual monsters crawling on skyscrapers by Metaio years ago near its Munich headquarters. As we stood in the snow, aiming a webcam tethered to a laptop at the building next door, the real building came alive thanks to radical new technology. What we saw at Metaio had a similar effect on Apple's CEO, Tim Cook. Soon, Apple had acquired Metaio and started down a path of developing Augmented Reality and including new sensors in its products, capabilities that are just starting to be explored. Today's phones have cameras, processors, small 3D sensors, and connectivity far better than that early prototype had, and tomorrow's phones and, soon, the glasses we wear will make today's phones seem similarly quaint.

In Israel, we saw new autonomous drones flying over the headquarters of Airobotics. These drones were designed to operate with no human hands touching them; robots even changed their memory cards and batteries. New Spatial Computing technology enabled the drones and robots to "see" each other and sense the world around them in new ways. Some of the drones were designed to fly along oil pipelines looking for problems, and others could fly around facilities that needed to be watched. Flying along fences and around parking lots, their Artificial Intelligence could identify things that would present security or other risks. These drones fly day or night and never complain or call in sick.

Focusing on just the technology, though, would have us miss what is really going to happen to the world because of these technologies. Our cities and countryside will be reconfigured by automation in transportation and supply chains as robot tech drives our cars and trucks, and robots roll down sidewalks delivering products. We'll spend more time in virtual worlds and metaverses. More of our interfaces, whether they are the knobs on our watches, cars, doors, or other devices, will be virtualized. In fact, many things that used to be physical may be virtualized, including stores and educational lessons from chemistry experiments to dissection labs.

Computing will be everywhere, always listening, always ready to talk back, and once we start wearing Spatial Computing glasses, visual computing will always be there, ready to show us visualizations of everything from our new designs to human patterns in stores, on streets, and in factories and offices. Some call this "invisible computing" or "ambient computing," but to us, these systems that use your eyes, voice, hands, and even your body as a "controller" are part of Spatial Computing.

At the same time, all this new computing is joined by radically fast new wireless technology in the form of 5G. The promises of 5G are threefold. First, we'll get more than a gigabit per second of data at the highest bitrates, and even the lowest rates promise more bandwidth than current LTE phones. Second, wireless will soon add almost no latency, which means that as soon as you do something, like throw a football in a virtual game, it happens, even if many players are viewing the football in real time. Third, 5G supports many more devices per tower, which means you will be able to live stream even a Taylor Swift concert with tens of thousands of fans filling a stadium.

When you combine 5G with all the new things under the Spatial Computing umbrella, you get a big bang. All of a sudden, cars can work together, one sending detailed 3D imaging of streets instantly to the cars behind it. New kinds of virtual games will be possible in the streets, where hundreds of people can play virtual football in parks and other places. Crazy new virtual shopping malls will appear, where virtual celebrities show you around and you can preview new products directly in a virtual scan of your home, or of other places as well.

A range of new capabilities will appear over the next few years in devices you wear over your eyes. There will be very light ones optimized for utility: showing you navigation and notifications, reminding you where you have left things, or nagging you to do some exercise or meditation to keep on top of your physical and mental health. There will also be heavier devices optimized for everything from detailed design or architecture work to entertainment and video games. We can even imagine owning several different kinds of Spatial Computing devices, along with some smart contact lenses, that will let us go out on a date night without looking like we have any computing devices on at all.

As the 2020s dawn, we have VR devices that cost a few hundred dollars and are great for games and a few other things, like corporate training. On the more expensive side of the scale, we have devices that can be used by car designers, or even as flight simulators to train airline pilots. The expensive ones, though, will soon look as out of date as one of the first cell phones does today. By 2025, the computing inside will shrink to a fraction of the size of today's devices, and the screens inside will be much sharper, capable of presenting virtual and augmented worlds that far exceed what we can experience today.

It is this next wave of devices that will usher in the paradigm shift in computing and in human living that we are discussing here. Already these changes are benefiting many enterprises, raising productivity. Inside many warehouses, hundreds of thousands of robots scurry about, moving products from trucks to packages. These new warehouses have evolved over the past decade and enable retailers to keep up with floods of new online orders that, back in 2000, were only dreamt of in futuristic books like this one.

The productivity gains will spread to many jobs. At Cleveland Clinic, surgeons are already using similar technology that shows them digital views from ultrasound, CAT scans, and other sensors. Like the warehouse worker who sees a blue line on the floor guiding her to the product she's looking for, the surgeon navigating toward a cancerous tumor sees the target light up like a missile guidance system, confirming they are in the right place to cut.

Other systems help workers "phone a friend" with new remote assistance features. This can save money for companies that have expensive machinery, and for other workforces too, including surgeons, architects, and engineers. At some plants, the savings will be substantial. It took us 30 minutes simply to walk across the Boeing floor where it builds airliners. In a plant like that, asking someone for advice virtually might save them an hour of walking just to come over and see your problem.

New devices let these remote helpers see what you are dealing with, and they can often show you visually what to do. Imagine trying to remove an engine while holding a phone or tablet in your hand. These systems, because they use wearable glasses, can let workers use both of their hands while talking and showing the remote assistant what is happening. The savings in downtime can be extreme. Imagine a problem that is causing a shutdown in a line at Ford. Every minute it is down, Ford loses about $50,000.

Even for salespeople and managers, the cost savings add up. A business trip from San Francisco to Los Angeles usually costs about $700, including airfare, hotel, a decent meal, and an Uber/taxi ride or two. Increasingly, these meetings will be replaced by ones held in Virtual Reality. Two headsets are $800; in just a couple of virtual meetings, the headsets pay for themselves. Social VR offerings from Facebook, Spatial, Microsoft, and others are rapidly improving to make these virtual meetings almost as good as physical ones. The time and cost saved will add up to big numbers, and workers will be happier. Workforces will be less likely to pick up viruses from travelers, too, which will also add up to big savings for corporations and reduced risks.
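To make the payback reasoning concrete, here is a minimal back-of-the-envelope sketch assuming the approximate figures above ($700 per avoided trip, $400 per headset, two participants); the numbers are illustrative, not vendor pricing:

```python
# Rough break-even sketch for replacing in-person trips with VR meetings.
# All figures are illustrative assumptions based on the estimates above.
trip_cost = 700        # airfare, hotel, a meal, and taxis for one person, one trip
headset_cost = 400     # assumed price of one headset; two participants need two
participants = 2

hardware_cost = headset_cost * participants   # $800 up front
savings_per_meeting = trip_cost               # one avoided trip per virtual meeting

meetings_to_break_even = hardware_cost / savings_per_meeting
print(f"Break-even after about {meetings_to_break_even:.1f} virtual meetings")
```

Under these assumptions, the hardware pays for itself by roughly the second avoided trip, which is the "couple of virtual meetings" figure quoted above.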

More lives could be saved, too. Mercedes-Benz had an Augmented Reality system built to show first responders how to cut apart a wrecked car. The app showed where the fuel and electrical lines were so that firefighters working to free an accident victim wouldn't start a fire or electrocute themselves. This isn't the only example we have of this technology helping first responders, either through better training or by assisting them on the scene. One such system helps police gather evidence and then recreate crime scenes that juries can virtually walk around.

Here, we've given you a taste of just how much the world is about to get reconfigured because of Spatial Computing technologies. Let's dig into the details of the chapters ahead.

Exploring Technological Change

Spatial Computing's technological change is laid out in Chapter 1, Prime Directive. Mobile phones soon will give way to headsets and glasses that bring computing to every surface. What is driving all of this new technology? We have a need for complex technologies to keep us around on this planet longer and in a more satisfied and productive state. What will drive us to build or buy new headsets, sensors, and vehicles, along with the connected systems controlled by Artificial Intelligence? Augmentation is coming, and that can mean a lot of different things, which we will explore.

We look back in Chapter 2, Four Paradigms and Six Technologies, at the previous three foundations of personal computing and include the new Spatial Computing paradigm. The six technologies discussed are those that enable Spatial Computing to work: Optics and Displays, Wireless and Communications, Control Mechanisms (Voice and Hands), Sensors and Mapping, Compute Architectures (new kinds of Cloud Computing, for instance), and Artificial Intelligence (Decision Systems).

It all started with the personal computer of the late 1970s. That paradigm shift was followed by graphical user interfaces and networking in the 1980s, and then by the mobile phone and other devices that started arriving in the 1990s, culminating with the iPhone in 2007. Then, we look forward to the next paradigm: why it will be so different, and why so many more people will get more out of Spatial Computing than they did out of the laptops, desktops, and smartphones that came before.

Human/machine interfaces are radically changing, and we visit the labs that brought us the mouse to understand the difference between how humans interfaced with computers using keyboards and mice and how we'll interface with hyper-connected cloud computing using voice, eyes, hands, and other methods, including even wearing suits with sensors all along our bodies. It's amazing to see how far we've come from the Apple II days, when there were very few graphics, to Spatial Computing, where cameras see the real world, decipher it, and decide how to drive a car around in it.

Transformation Is Coming

Because of this new technology, cities and even the countryside will change, autonomous vehicle pioneers tell us in Chapter 3, Vision One – Transportation Automates. Soon you will tell your glasses, "Hey, I need a ride," and you'll see your ride arrive. Sounds like Uber or Lyft, right? Look closer: there isn't a driver inside. Now think of the cost and the other advantages of that. Economists see that such a system could cost a fraction of today's services and could do many other things as well: "Hey car, can you go pick up my laundry and then dinner for our family?" The problem with such a world, many tell us, is that we'll probably see much more traffic near cities as we use transportation to do new things, like pick up our laundry. This is why Elon Musk started the Boring Company to build tunnels under cities. We show some other solutions pioneers have come up with, including special roads for these vehicles and new kinds of flying vehicles that will whisk commuters into city centers, passing above all that new traffic.

Transportation soon will include more than just cars and trucks, too. Already, lots of companies are experimenting with new kinds of robots that will deliver products much more efficiently and, in a world where viruses are a new threat, without human hands touching them either. We talk with a company that is already rolling out such robots on college campuses and elsewhere.

Autonomous cars might look like they are rolling around the real world, but often they are developed by rolling them around inside a simulation. Simulations are how engineers test out AI systems and come up with new ways to train the AIs. After all, you don't want to wreck 500 cars just to figure out how to handle someone running a red light, do you? If you walk around the simulations built by Nvidia and others, they look like real streets, with real-looking and real-acting traffic, pedestrians, and even rainwater and puddles after rain. This technology has many new uses beyond training robots and autonomous vehicles, though, and the computing inside it is radically different from the kind that was used to build Microsoft Windows over the past few decades.

Here, new AI systems fuse dozens of sensor and camera readings together and then look for patterns inside. Some of the cars rolling around Silicon Valley and other cities, like Phoenix, Arizona, have more than 20 cameras, along with half a dozen spinning laser sensors that see the world in very high-definition 3D.
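As a highly simplified sketch of what "fusing" those readings can mean, and not how any particular vehicle's software actually works, imagine combining a camera detection and a lidar detection of the same object, weighted by each sensor's confidence (the field names and numbers below are invented for illustration):

```python
# Toy illustration of sensor fusion: combine a camera detection and a lidar
# detection of the same object into one estimate, weighted by confidence.
# All field names and numbers are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Detection:
    x: float          # estimated distance ahead of the car, in meters
    y: float          # estimated lateral offset, in meters
    confidence: float # 0.0 to 1.0
    label: str        # e.g. "stop_sign"

def fuse(camera: Detection, lidar: Detection) -> Detection:
    """Confidence-weighted average of two detections of the same object."""
    total = camera.confidence + lidar.confidence
    wc, wl = camera.confidence / total, lidar.confidence / total
    return Detection(
        x=wc * camera.x + wl * lidar.x,
        y=wc * camera.y + wl * lidar.y,
        confidence=max(camera.confidence, lidar.confidence),
        label=camera.label,  # in this toy example, trust the camera's classification
    )

fused = fuse(Detection(41.8, 2.1, 0.7, "stop_sign"),
             Detection(40.2, 2.4, 0.9, "stop_sign"))
print(fused)  # a single, combined estimate of where the sign is
```

Real perception stacks are far more sophisticated, typically relying on probabilistic filters and neural networks, but the basic idea of weighing multiple noisy sensors against each other is the same.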

Is that a stop sign or a yield sign? Humans are good at that kind of pattern recognition, but computers needed to evolve to do it, and we dig into how these systems work for autonomous cars and what else this kind of technology could be used for: maybe playing new kinds of games, or visiting new kinds of virtual amusement parks where virtual actors interact with you. How will such things be possible? Well, let's start with the huge amount of bandwidth that will soon appear as 5G rolls out and new devices show up on our faces and in our pockets to connect us to these new Spatial Computing systems. Yes, 5G can support these new kinds of games, but it can also let a car tell the cars behind it that there's a new hazard in the road that needs to be avoided.

New Vision

Games aren't the only things that better devices will bring. Chapter 4, Vision Two – Virtual Worlds Appear, provides details on Technology, Media, and Telecommunications, another of our seven industry verticals to be disrupted. We start out by detailing the different kinds of devices that are available to bring a spectrum of Spatial Computing capabilities to your face, from Virtual and Augmented Reality headsets to lightweight smart information glasses, and even contact lenses with displays so small that it will be very hard to tell that your friend is wearing one.

There are pretty profound trade-offs made as manufacturers bring devices to the market. VR headsets emphasize immersion, the feeling you get when you see something beautiful wrapped all around you. Augmented Reality headsets focus on the virtual layer that they reveal on top of the real world; often it's amazing and magical, albeit usually with less of that "I'm in a dark movie theater with a huge screen" feeling. Then there are a few other devices that focus mostly on being lightweight, bringing navigation and notification-style functionality. Our guide isn't designed to be comprehensive, but rather to help you understand the market choices that both businesses and consumers will soon have to make.

While cataloging the device categories, we also show some of the new entertainment capabilities that will soon arrive, captured with new arrays of volumetric and light-field cameras. We visited several such studios and will deliver you into a new entertainment world, one where you can walk around in, and interact with, the objects and virtual beings inside.

These new media and entertainment experiences are arriving with a bundle of novel technologies, from AR Clouds, which contain both 3D scans of the real world and tons of virtual things that can be placed on top of them, to complete metaverses where users can do everything from building fun new cities to playing new kinds of games with their friends. Enterprises are already building a form of AR Cloud, called a "Digital Twin," which is changing a lot about how employees are trained, work together, and manage new kinds of factories.

We have visited the world's top manufacturing plants, and in many of them, we see new kinds of work being done with lots of robots that didn't exist just a few years ago, and workers walking around wearing new devices on their faces that help them learn or perform various jobs. In Chapter 5, Vision Three – Augmented Manufacturing, you'll learn how Spatial Computing is changing even how factories are designed. Increasingly, these factory floors are using robots, and the robots are different than they used to be. The older ones were kept in cages designed to keep humans away; those can still be found welding or, as in Ford's Detroit factory, putting windshields into trucks. Newer robots work outside cages and sometimes can even touch humans. These robots are called "cobots," short for collaborative robots, because they work alongside humans and can greatly assist workers.

In the upcoming years, humans will both be trained to work with these new robots using new VR and AR technologies and will train the robots themselves, using new headsets with user interfaces that let them virtually control factory floors. As these new Spatial Computing technologies are increasingly used on factory floors, they bring new capabilities, from virtual interfaces for physical machines to new productivity enhancers. For instance, in many of these systems, workers can leave videos, 3D drawings, scans, and other notes for workers on the next shift to see: "Hey Joe, the cutting machine is starting to misbehave. I ordered a new motor for it so you can fix it when the line is down at 2 p.m."
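As a minimal sketch of what such a handover note might look like as data in a Digital Twin, here is one hypothetical way to represent it; the field names, machine ID, and attachment format are assumptions for illustration, not taken from any real product:

```python
# Hypothetical sketch of a shift note anchored to a machine in a Digital Twin.
# Field names, IDs, and the storage scheme are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ShiftNote:
    machine_id: str                 # which machine in the plant model this note is pinned to
    author: str
    text: str
    attachments: list = field(default_factory=list)  # e.g. video or 3D scan file paths
    created_at: datetime = field(default_factory=datetime.now)

note = ShiftNote(
    machine_id="cutting-machine-07",
    author="night-shift",
    text="The cutting machine is starting to misbehave. New motor ordered; "
         "fix it when the line is down at 2 p.m.",
    attachments=["inspection_scan.glb"],
)
print(note.machine_id, "-", note.text)
```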

Pervasive Change – Shopping, Healthcare, and Finance

The products that Joe makes will eventually be found in retail stores, many of which will be quite different from the ones that Sears ran decades ago. Walk into an Amazon Go store and look at the ceiling. You'll see hundreds of cameras and sensors aimed back at you, watching and categorizing every move you make. In the Go store, you are charged for the products you take without having to talk to a checkout clerk, pull out a credit card, or really do anything other than walk out of the store. In Chapter 6, Vision Four – Robot Consumers, we detail these changes, along with others that make retail stores, even traditional ones, more efficient and better for both consumers and producers, and we cover useful new Augmented Reality technologies that make shopping at home much easier.

As we were finishing up this book, Apple announced a new iPad with a 3D sensor and demonstrated why people should buy it by showing off one of these futuristic shop-at-home experiences, where the person holding the tablet could drop chairs and other items into their real home to see how they fit. The changes, though, don't stop at the shopping experience. Behind the scenes, these technologies are making things more efficient, helping logistics companies pick and pack products faster and get them to stores sooner and with fewer losses, and we detail how Spatial Computing is making a difference there.

Similar changes are underway in healthcare. They are so profound that Dr. Brennan Spiegel, Director of Health Research at Cedars-Sinai, says that we should expect a new kind of healthcare worker: "The Virtualist." This new kind of practitioner will perform several roles: helping patients prevent disease, helping doctors deliver a new form of healthcare, and helping nurses, doctors, and other staff perform their jobs more efficiently using new Spatial Computing technology. For instance, let's say you need surgery. A surgery team at Cleveland Clinic is already using Microsoft HoloLens 2 headsets to see inside you, thanks to images from scanners being visualized inside the surgeon's headgear. We discuss, in Chapter 7, Vision Five – Virtual Healthcare, how this system guides the surgeon to the right spot to cut out a patient's cancerous tumor.

That's just a tiny piece of what's happening in healthcare due to Spatial Computing technology. In another wing of the hospital, doctors are using VR to address mental illnesses and ailments from PTSD to dementia, with more applications on the way. At the University of Washington, researchers discovered that VR often is much better at treating pain than opiates, which are far more dangerous, killing tens of thousands of Americans every year due to addiction. In other places, nurses noticed that patients going through tough procedures preceding childbirth felt a lot less pain if they were watching a 360-video experience during the process. Other doctors have even found ways to enhance athletes' perception. These brain tricks and virtual remedies have the capacity to significantly change healthcare. Pfizer's head of innovation told us that she views Augmented and Virtual Reality as the future of medicine.

How does all this work? Largely on data. New predictive systems will watch your health by having sensors look into your eyes, watch your vascular system, or bloodstream, and sense other things, too, maybe to the point where they see that you are eating too much sugar or smoking too many cigarettes. One could even warn your doctor that you aren't taking your medicine or performing the exercises that she prescribed. Are we ready for new tough conversations with our doctors? Remember, in this future, your doctor might be a virtual being from whom you don't mind hearing the harsh truth. One of the studies we found showed that patients actually are much more honest about their mental health problems when talking with a virtual being, or in a chat run by Artificial Intelligence.

These AIs won't just be helping us keep our health on track, either. Similar systems might let us know about market changes that we need to pay attention to or, as banks already do when they notice that buying behavior is out of bounds, warn us about other things. The financial industry is generally a conservative one, so adoption of Spatial Computing technologies there is contingent upon demonstrated, clear utility, and those technologies must be additive to the bottom line. Currently, very little Spatial Computing is being actively used there; however, the possibilities are very promising. In Chapter 8, Vision Six – Virtual Trading and Banking, we cover the future uses for Spatial Computing in the financial industry.

In this chapter, we review the functional areas where we think Spatial Computing will have its greatest impact, including 3D data visualization, virtual trading, ATM security and facial payment machines, and virtual branch functionality and customer service.

Someday soon, we may never go into a physical bank again due to these changes, but could the same happen with our schools? Already, teachers are using Augmented and Virtual Reality to teach all sorts of lessons, from chemistry experiments to math visualizations, to even virtual dissections of real-looking animals.

New Ways to Learn

COVID-19, though, showed us that sometimes we might need to rely completely on teaching virtually, and in Chapter 9, Vision Seven – Real-Time Learning, we talk with educators and others who are aggressively using technology to make learning more virtual. It isn't only for kids, either. Soon, because of automation, we'll need to retrain millions of adults around the world, and schools and universities are responding with new curricula, new learning programs for Virtual and Augmented Reality, and new support systems that will enable even truck drivers to change careers. Speaking of careers, companies like Caterpillar are already using Augmented Reality glasses to train workers to fix their expensive tractors in real time. Many new VR-based training systems are being developed, from simulators that help police learn how to deal with terrorist situations, to ones that show quarterbacks how to perform better, to training at Walmart that shows retail workers how to manage stores better. Verizon even used VR-based training to teach its retail store workers what to do if they are being robbed. What if, though, the system could do even more, we asked, and predict what we might do next and assist us with that?

How can computers predict our next move? Well, truth be told, we are somewhat predictable. We buy groceries at the same store every week, visit the same gas stations, go to the same churches, schools, offices, movie theaters, laundries, and head home at pretty much the same time every evening.

Watching our friends, we can usually predict what they will order from menus or how they will complain when we try to get them off pattern. Ever try to take someone who prefers steak and potatoes to a sushi restaurant? Can't we predict that they will have the same preferences tomorrow night? Yes, and so can computer algorithms, but Spatial Computing systems could soon know a lot more about us than even our best friends do, since they could watch every interaction, every product touched, every song picked, and every movie watched. In Chapter 10, The Always Predicted World, we show how that data will be used in each of our seven disruptable industries to serve users in radically new ways.
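As a toy illustration of this kind of pattern-based prediction, and nothing like the scale or sophistication of a real system, a predictor could simply count which action most often follows the current one in someone's history (the history below is fabricated for the example):

```python
# Toy next-action predictor: count which action most often follows another.
# The history below is fabricated; real systems would use far richer signals.
from collections import Counter, defaultdict

history = ["coffee", "gym", "office", "lunch", "office", "grocery", "home",
           "coffee", "gym", "office", "lunch", "office", "home"]

# Build bigram counts: for each action, what tends to come next?
followers = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    followers[current][nxt] += 1

def predict_next(action):
    """Return the most frequently observed follow-up to `action`, if any."""
    counts = followers.get(action)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("office"))  # prints 'lunch', the most common follow-up here
```

Real predictive systems would blend far more signals, such as location, time of day, and what your glasses see, but the underlying idea of learning a routine from repeated observations is the same.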

Meeting the Pioneers

We predict that you will enjoy learning more about the seven people who are pushing Spatial Computing further in Chapter 11, Spatial Computing World-Makers. Instead of rattling through a list, we'd like to tell you why we picked the people here. One spends time in retail stores the world over and uses technology to help them become not only more profitable, but more customer-centric. Another developed Google's autonomous vehicle technology and has gone on to build out a huge vision of the future of transportation. You'll meet one leader who, from their perch at Qualcomm, sees literally every new product coming before the rest of us do. Also on the list are a couple of investors, one East Coast, one West, who are pouring resources into entrepreneurs who are bringing us the future of virtual beings, robots, and the AI that runs them. Finally, we have a doctor who is pushing the healthcare system forward into a world of Augmented and Virtual Reality and a successful innovator who builds companies that have immersiveness and VR at their core. We picked them out of the thousands that we've studied because they represent a guiding hand that will bring "superpowers" to us all.

Thinking Ahead

With these new superpower-like capabilities come responsibilities that the companies and organizations who create and use Spatial Computing owe to the people who use their technologies. New technologies bring with them new ways of doing things, and the more significant the change, the more uncharted and unknown are the ramifications of their use. In Chapter 12, How Human?, we provide a philosophical framework put forth by L.A. Paul of Yale University in her book, Transformative Experience, that explains why human beings tend to have cognitive issues with radical new technologies. We then discuss recent issues regarding privacy, security, identity, and ownership, and how they relate to Spatial Computing. Finally, we take up how Spatial Computing technologies can be utilized to bring about human social good.

Starting the Journey

We wrote The Infinite Retina for a wide audience of non-technical people, rather than for engineers. By the end of the book, you should understand the technologies, companies, and people who are changing computing from something you do while sitting and facing a flat computer screen or holding a phone, to computing that you can move through three-dimensionally. We focus on how Spatial Computing could be used by enterprises and how it could radically change the way human beings learn from information and visuals and understand their world. We very strongly believe that enterprise use of Spatial Computing will lead to massive consumer use, and we are excited to share what we've learned in this book with you.