The Engineer’s guide to the future

Andy O'Sullivan
15 min read · Jul 17, 2018


This post is a vision for the near to medium future, aimed at engineers — or developers, coders, designers, or whatever else you want to call people who use computers to make new and amazing things. The creators. It’s also aimed at anyone else interested in the future and what is coming, and what they can do to prepare for it.

So basically, this is for everyone!

My full-time job is innovation — exploring new and emerging technology to see how it can be of use. Can it bring business value, can it improve customer satisfaction, can it make customers’ lives safer and more secure, can it make money or save money? Part of this is looking into the future and trying to predict what is going to happen, or at least, what could happen.

I believe that we need to keep up with technology to keep learning — new languages, new interfaces, new platforms, new user habits, new designs — or we will be left behind. We can’t assume that what we’re doing now will be relevant in the future. So here’s a few thoughts on the future, and what I think that engineers (and everyone else!) should do to prepare themselves.

The Future is Augmented

Augmented Reality (AR), also known as Mixed Reality (MR) or even more trendily, Extended Reality (XR), is approaching blockchain levels of hype.

With the advent of ARKit and ARCore, augmented reality apps are now a lot easier to make. There are a lot of people trying to sell them as amazingly groundbreaking, with the ability to change retail, health, finance, whatever!

I love AR and have been experimenting with it for a while now. From building the mandatory portal apps, to building an app to map out floor plans, to making a crazy game for a kids’ coding event (with tutorial and open-source code), I’ve learned firsthand how to build AR experiences and how fun they can be.

Screenshot from AR Madness. Tutorial here!

They aren’t going to change the world, though — as currently, the only way to view them is via your smartphone’s camera view. No matter how cool something looks floating in the real world, if you’re holding your phone up to see it, it’s not as immersive as it could be and probably not as convenient as a normal app is.

However, AR headsets (or AR “glasses”) — devices worn over the eyes through which you can see the real world as normal, but also see virtual objects placed there — are the future. I believe that if they are made cheaply and conveniently enough, they will not necessarily replace the smartphone, but will provide a better user interface that will bring a massive technological and social change.

The current state-of-the-art option is Microsoft HoloLens, while the Meta 2 is cool as well. But they’re too big and too expensive. Still, more and more commercial and industrial use-cases are being highlighted, like the one below from Microsoft, where HoloLens is being used to design trucks. But these headsets are nowhere near ready for mass-market consumer use.

Microsoft HoloLens: source

I imagine a future, maybe not too far away, where Apple brings out iGlasses, stylishly white (or steel, or rose! burnished steel rose!) glasses that can display content around you, easily and seamlessly.

Or perhaps it won’t be Apple, or Google or Samsung, but a new player in the market that brings out the killer Walkman / iPhone of AR glasses. It doesn’t matter who does it. What does matter is that I believe AR glasses will become mass-market ready. And when they do, it will be as big a change as TV was to Radio, or as the Smartphone was to the PC.

Magic Leap is at the top of the AR hype curve, with their grandiose proclamations of “bringing magic back into the world” and their rather awesome promotional images:

It’s fake but I’d buy it. source: Magic Leap

Their latest demo doesn’t match these aspirational heights, but shows what is coming:

What they are trying to achieve will eventually happen. We will be able to view any content we wish — movies, pictures, email, video-calls, fairies, dragons, whatever — in the air in front of and around us, allowing us to see content not just on the little black screen on an iPhone or Android.

This change will be profound, offering opportunities to show content to users in a way they haven’t experienced it before.

Imagine designing a webpage or app that isn’t restricted to the size of a computer screen or a smartphone, but only to the space around the user.

Imagine being able to show three dimensional content at varying distances from the user, that they can walk through and around, and that they can interact with. Virtual Reality can transport users to new worlds, but AR glasses could allow you to stay in the real world but mix it in with fun, useful content.

I was working as a software engineer when the first iPhone came out, but it didn’t occur to me to try making any apps for it. Of course I do so now — but looking back, it would have been great to have been amongst the first app developers.

So what can engineers do to get AR ready and take advantage of new opportunities when they arrive?

  • Learn the concepts of AR now — how it works, what it’s currently capable of, what new capabilities are being developed. Think about what user interfaces could be like with AR — how would users interact with it using voice, gaze, virtual touch, proximity and more.
  • Learn about 3D models — formats, where to get them, how to make them. If you have time, learn how to use Blender.
  • Learn Unity. If any tech is likely to be of use for AR in the future, it’s Unity. Already heavily used for 3D games and VR, it comes with plugins for Apple’s ARKit. More info here.
  • Learn ARKit itself — if you, like me, prefer native programming, learn Apple’s awesome AR SDK, currently with version 2 in beta. Info here.
  • Learn ARCore — the Android AR SDK, something I plan on doing myself when I manage to find some spare time from somewhere! Info here.
  • If you can, try the current headsets — HoloLens, Moverio, Meta, and when it’s finally out, the Magic Leap headset. Magic Leap already has SDKs available. And who knows, maybe they aren’t lying about the magic! Info here.
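If the anchors and transforms in those SDKs sound abstract, the maths underneath is quite approachable. Here’s a toy Python sketch of the core idea: frameworks like ARKit and ARCore express where a virtual object sits as a 4x4 transform matrix that maps the object’s local coordinates into world space. The matrix values below are invented; in a real app the framework hands them to you.

```python
def transform_point(matrix, point):
    """Apply a 4x4 row-major transform matrix to a 3D point (w = 1)."""
    x, y, z = point
    result = []
    for row in matrix[:3]:
        result.append(row[0] * x + row[1] * y + row[2] * z + row[3])
    return tuple(result)

# Identity rotation, translated to (0, 1, -2): one metre up and two
# metres in front of the camera (in ARKit, -z is "forward").
anchor_transform = [
    [1, 0, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 1, -2],
    [0, 0, 0, 1],
]

# The object's local origin ends up at the anchor's world position.
print(transform_point(anchor_transform, (0, 0, 0)))  # → (0, 1, -2)
```

That one matrix multiply, done per object per frame, is most of what “placing content in the world” means.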

Moving on, let’s talk about AI and friends…

Artificial Intelligence / Machine Learning / Deep Learning

If AR is hyped, AI is basically the buzzword of the century. Lots of people aren’t really sure what it means, but they know it’s important and that their business needs it.

The first thing to know is that modern-day Artificial Intelligence doesn’t actually mean a computer being intelligent — it’s basically a catch-all term for computer programs that can “learn”, that is, improve their performance with data or experience. Even at that, lots of applications that say they use AI actually don’t. A chatbot that has a big decision tree in the background isn’t AI, it’s just a big decision tree.

If you ask “What is Ragnarok?” and get back the answer “It is simultaneously a great action movie and the ruin of a good character” — it’s probably not artificial intelligence, just quite wise.
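To make that distinction concrete, here’s a minimal Python sketch of a “chatbot” of exactly that kind: a hard-coded lookup, with no learning anywhere. The questions and answers are invented for illustration.

```python
# A "chatbot" that is really just a decision tree (here, flattened
# into a lookup table). Nothing in it improves with use, so by the
# definition above it isn't AI.

DECISION_TREE = {
    "what is ragnarok?": "It is simultaneously a great action movie "
                         "and the ruin of a good character",
    "hello": "Hi there!",
}

def chatbot_reply(message):
    # Plain dictionary lookup on the normalised message.
    return DECISION_TREE.get(message.strip().lower(),
                             "Sorry, I don't understand.")

print(chatbot_reply("What is Ragnarok?"))
```

Wise, perhaps, but every reply was typed in by a human beforehand.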

However, there is plenty of amazing work being done with proper AI and Machine Learning, for a whole heap of use-cases. We don’t need a crystal ball to say that knowing about AI will be beneficial for a future engineering career. Similar to Apple and Google releasing tools to “democratise” Augmented Reality development, each year there are more tools available to enable developers to build AI solutions, even if they don’t fully understand the inner workings.

I can neither confirm nor deny that I have one of these on my desk

Above is a picture of the AWS DeepLens, a recent camera device from Amazon which allows you to run “deep learning” models such as facial recognition, activity recognition, or, em — hot dog recognition — locally on the camera, removing the need to run the models in the cloud. At about $250, it’s cheap and (relatively) easy to set up and use.

A bit like the AR headsets, it’s the first iteration in what’s bound to be smaller and cheaper devices, and comes with the benefit of being already hooked up to AWS and its plethora of services.

Some use cases you may want to explore other than identifying hotdogs?

  • Home security
  • Commercial security
  • Sentiment detection
  • Doing something about that dog that goes on the green beside your house when its owners think no one is awake to see them letting it out. Ok … that might just be me …
  • and whatever else you can do with a cheap device that can run machine learning models locally.

This is just one example of how you can start experimenting with AI. The field is so big (as pretty much any problem could have a solution that involves some AI) that it’s more about identifying where AI could add value, and then learning more about how to apply it to that specific domain.
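For a feel of what one of those locally-run models actually produces, here’s a toy Python sketch of a classifier’s final step: the model emits a raw score (a “logit”) per label, and softmax turns those scores into probabilities. The labels and scores here are invented.

```python
import math

LABELS = ["hot dog", "not hot dog", "dog on the green"]

def softmax(logits):
    # Exponentiate each score, then normalise so they sum to 1.
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def classify(logits, threshold=0.5):
    # Pick the most probable label, but refuse to guess when the
    # model isn't confident enough.
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "unsure"
    return LABELS[best]

print(classify([4.0, 1.0, 0.5]))  # → hot dog
```

The hard part, of course, is producing good logits in the first place; that’s what the trained model on the device does.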

Voice

Voice interfaces (think Alexa, Google Home, Siri … ok, only kidding about Siri!) will be huge, if they aren’t already. As more devices become voice capable — watches, fridges, cars, whatever — more and more people will consume services via voice interfaces. This raises a whole lot of interesting questions for engineers and designers, such as:

  • All I know about is React Native, can I use that to speak to my users?
  • I’ve got a killer responsive template that we can — oh, wait, there’s no screen?

It’s further complicated by the fact that the voice interface may also have a screen, and/or other channels! Anyone who’s built an Alexa Skill (the Echo version of an app) has a head-start here. If you haven’t, there’s a ton of resources online, including:

  • My recent post on how to make an Alexa Skill on your teabreak, with node.js.

and a few posts by my colleague Gillian Armstrong, a leading voice in chatbots and conversational interfaces.
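Whatever language you pick, the shape of an Alexa Skill handler is the same: Alexa sends a JSON request naming an “intent”, and your code returns JSON containing the speech to play back. A minimal Python sketch of that shape (my tutorial above uses node.js; “TeaBreakIntent” is an invented intent name for illustration):

```python
def lambda_handler(event, context=None):
    # Alexa wraps everything in a "request" object; an IntentRequest
    # carries the name of the intent the user's utterance matched.
    request = event["request"]
    if (request["type"] == "IntentRequest"
            and request["intent"]["name"] == "TeaBreakIntent"):
        speech = "The kettle is on."
    else:
        speech = "Sorry, I didn't catch that."
    # The response tells Alexa what to say and whether to keep listening.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

sample = {"request": {"type": "IntentRequest",
                      "intent": {"name": "TeaBreakIntent"}}}
print(lambda_handler(sample)["response"]["outputSpeech"]["text"])
```

Notice there’s no screen anywhere in that exchange: the whole interface is the `speech` string.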

So what can you do?

  • Take a look at the cloud vendors (Google, AWS, IBM etc) and see what AI/machine learning services they have, and try them out.
  • Try out the open-source tools, like TensorFlow.
  • There’s a heap of online courses, find a good one and get learning. Here’s a post from freeCodeCamp last year with more info.
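Before diving into those services and frameworks, it helps to see that “learning” is, at its core, just nudging numbers to reduce an error. A toy sketch in plain Python, fitting a one-parameter model to invented data (y = 3x) by gradient descent:

```python
# Invented training data following y = 3x.
data = [(1, 3), (2, 6), (3, 9), (4, 12)]

w = 0.0    # the single model parameter, starting from a bad guess
lr = 0.01  # learning rate: how big each nudge is

for _ in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Step downhill: this is the "learning" in machine learning.
    w -= lr * grad

print(round(w, 2))  # converges towards 3.0
```

Real models have millions of parameters instead of one, but the loop is recognisably the same.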

Automation

Another side to “Artificial Intelligence” that I have to mention and that causes a lot of conversation and anguish is Automation. This basically means computer programs replacing humans in some task.

As I mentioned in a previous post on this site, Stephen Hawking said here:

“The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.”

I’ve another take on this:

If it can be automated to save money or make money, it will be automated.

As AI tools get better, jobs such as answering phones will be automated away, if it makes financial sense to do so. If you think customers would prefer to talk to a human, you’re not realising that they soon won’t be able to tell the difference:

Google Duplex, as seen above, makes phone calls sounding like a human, complete with “mmm-hmms” and other human-like mannerisms. And this is just the current iteration.

Google says it will be great for tasks such as ringing the doctor to make an appointment when your kid is sick — but I’m presuming that their real target audience is the large enterprises that currently pay thousands of humans to answer phone calls. “Currently” being the operative word in that sentence.

What does this mean for us engineers?

  • There’ll be a lot of work in building solutions to automate anything that can be automated (if it makes money or saves money!).
  • We’ll have some interesting ethical questions to ask ourselves. Again see that previous post!

What does it mean for everyone else?

I gave a couple of talks recently on this subject, and a few people told me afterward that they were “great but scary”. I don’t intend to scare anyone talking about AI and automation, but people deserve to know that technology companies and large corporations (and countless startups) are spending a lot of time and money researching and developing new ways to automate.

I always advise people to become more technology capable, and ideally to learn how to code. Coding isn’t easy, but with time and effort it can be mastered, like most skills, and can offer a whole new world of creativity and perhaps even a new career.

Technology is my job, but it’s also what brings me tremendous satisfaction. Creating apps, websites or whatever, seeing them come to life, and seeing others use them brings a great sense of achievement, and helps us become creators, as opposed to just consumers.

So while the inexorable evolution of technology may be scary, it also brings with it fantastic opportunities — both in new jobs and also personal rewards.

The Future of Coding

What about coding itself — will that ever be automated? So much of an engineer’s work is writing code that:

  • They’ve probably written before for something else
  • Someone else has probably written for something else
  • Will probably break and require some time searching through Stack Overflow!

There are countless SDKs, platforms, code repositories and so on that attempt to alleviate this somewhat — by providing functions to handle common tasks, reduce the amount of code to be written, give pre-built modules that can be plugged in. But if you’re building a solution, there’s still going to be a lot of code in it.

I’ve often thought that when I’m writing a function to do something like retrieve information from a database, or post information to a service somewhere, it’s a waste of my time. Not that I’m too cool to be writing code to connect to the database, but that it’s boilerplate code that rarely changes and is actually wasting my time. I could be working on something more worthwhile — like experimenting with different UIs to see which users prefer, or adding in more content to delight my customers.
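Here’s the kind of boilerplate I mean, sketched with Python’s built-in sqlite3. The table and column names are invented, but the fetch-a-record shape is the same in every project I’ve ever worked on:

```python
import sqlite3

def get_customer(conn, customer_id):
    # Parameterised query, fetch one row, map it to a dict.
    # Rarely changes, adds no customer value, written a thousand times.
    cur = conn.execute(
        "SELECT id, name FROM customers WHERE id = ?", (customer_id,))
    row = cur.fetchone()
    return {"id": row[0], "name": row[1]} if row else None

# In-memory database so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Andy')")

print(get_customer(conn, 1))  # → {'id': 1, 'name': 'Andy'}
```

Multiply that function by every table and every operation in a project, and the time cost becomes obvious.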

I also often look at new releases of popular SDKs and wonder, is it all a waste of everybody’s time? Are we making an industry out of creating new ways to write code, when we should be trying to write less code?

I predict that over time, more and more of coding will be replaced by no-code solutions, or AI programs that can write code.

Imagine a system where you can just say ‘I want the front page to have two login buttons connected to a secure database’ and it’s just created for you.

Or imagine that it actually isn’t you doing the talking, it’s a “business” person who doesn’t need you anymore! Remember — anything or anyone can be automated if it saves money or makes money!
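As a toy illustration of that idea: in a no-code world, a spec rather than hand-written code drives what gets generated. The spec format and output below are invented, and real no-code platforms are vastly more capable, but the principle is this:

```python
def generate_page(spec):
    # Turn a declarative spec into markup: nobody wrote the
    # "front page code", it falls out of the description.
    parts = [f"<h1>{spec['title']}</h1>"]
    for button in spec.get("buttons", []):
        parts.append(f"<button>{button}</button>")
    return "\n".join(parts)

spec = {"title": "Front Page", "buttons": ["Log in", "Sign up"]}
print(generate_page(spec))
```

Swap the hand-written generator for an AI that interprets free-form descriptions, and you have the scenario above.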

I don’t see this happening anytime soon, but I believe it eventually will. Debates and questions such as “which is better — Angular or React?” or “should I make my app natively with Swift and Kotlin, or cross-platform with Xamarin?” won’t be needed anymore.

If this scares you more than Google Duplex, just remember that you still need coders to write the AI coding programs! So it’s back to you needing to learn more about AI!

What does that mean for us?

You should still learn to code, and keep learning new languages, SDKs and methods.

Coding will never completely go away, and it’s more than just writing lines into an IDE — coding is about creating, and about solving problems and making things better, be it services people use, or the world itself.

My point about the coding is that we shouldn’t assume that in 20 years we’ll still care so much about serverless architectures, Node.js, Python, or … (fingers crossed) automated test scripts! I’m joking. Honestly. I love testing, and write all my tests before my actual code.

It also means that I’d recommend that engineers look outside of coding. I’m basically the worst artist you can think of, but I constantly try to learn more about graphic design, about UI and UX, so I can make better, more beautiful apps and solutions. See one of my old, but most popular, posts here on making apps not look awful!

What else could happen?

The beauty of technology (and the world) is that you never know when some genius is going to create something new and change the world. When I started my career as an engineer, smartphones didn’t exist. Now so much of the work I do relates to them. But like I said earlier, I can see that changing soon, too.

Here’s some varied, and potentially crazy, thoughts on what else could happen to impact us:

  • Quantum Computing — I could have spent more time on this, but I don’t yet know enough about it to do it justice. Basically, quantum computing could be capable of carrying out calculations heretofore impossible, and could also do things we may not like as much, such as breaking all known encryption algorithms. This field is likely to be huge, and focusing your efforts here would probably not be a bad career move.
  • Driverless cars will replace most cars. And they won’t be called ‘cars’ (we don’t call cars horseless carriages, and we won’t call cars that drive themselves, cars). See my post here ‘Your Grandkids won’t need to know how to drive’. These “cars” will basically be massive computers, with screens, voice interfaces and more, so there will be so much potential for engineers to get involved.
  • Ageing could be extended dramatically, most likely for the super-rich. If you’re a billionaire right now, you’re most likely investing in a remote hideaway in case civilisation falls, or doing research into life-extending technology. More to read on this here and here.
  • The population could explode as major illnesses like cancer and heart disease are prevented / cured, along with road-accidents being eliminated by those driverless cars.
  • The population could be significantly reduced by a pandemic or other disaster.
  • Clean, free energy could be created, either by Nuclear Fusion (read more here) or some new breakthrough. What changes would next-to-free electricity do for industry, and for consumers?
  • The Space Industry could grow massively — think asteroid mining or tourist trips to the moon. Bored of programming APIs? Why not help program a probe to mine ore on asteroids?
  • War at a scale not seen since the 20th Century.
  • Nano-technology — micro-robotics that could be used for innumerable things — medicine, manufacturing, military, security and much more. And fighting crime.
icons8 as usual!
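Of that list, quantum computing is the one you can start playing with today, because small quantum systems are easy to simulate classically. A toy Python sketch (pure stdlib): a qubit is a pair of complex “amplitudes”, gates rotate them, and applying a Hadamard gate to the |0⟩ state gives a 50/50 superposition:

```python
import math

def hadamard(state):
    # The Hadamard gate mixes the two amplitudes of a single qubit.
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

state = (1 + 0j, 0 + 0j)  # the qubit starts in |0>
state = hadamard(state)

# Measurement probabilities are the squared magnitudes of amplitudes.
probs = [abs(amp) ** 2 for amp in state]
print([round(p, 2) for p in probs])  # → [0.5, 0.5]
```

Simulating n qubits needs 2^n amplitudes, which is exactly why real quantum hardware matters: the classical shortcut stops scaling almost immediately.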

If all that sounds a little far-fetched, just think of my Dad, born in 1949, when we didn’t have:

  • The Internet
  • Facebook, Instagram, Snapchat, WhatsApp, WeChat, Twitter, LinkedIn, Medium or Netflix
  • Smartphones
  • Personal Computers
  • Birth Control pills
  • In-Vitro Fertilization
  • DNA Testing and Sequencing
  • GPS
  • Guns ’n’ Roses

So — that’s one view of what the future could be. I could be completely right or completely wrong, or somewhere in the middle.

Before I go though, I can’t finish without mentioning:

Actual Artificial Intelligence.

If someone on this planet is clever enough to actually create a computer program that is conscious, it’s impossible to know what would happen. But I can hazard an idea. Put one way — imagine chimpanzees had somehow invented humans.

Do you think the chimpanzees would have been happy with their creation?

They wouldn’t even understand what they’d created, and they wouldn’t really understand anything the humans themselves went on to create — the Internet, the Microwave, Bitcoin, whatever.

True AI could bring enormous benefits. An entity (or entities) infinitely more intelligent than ourselves could solve problems we can’t — scientific, medical, sickness, even perhaps death. Would it choose to do so, though? Have we made life wonderful for chimpanzees?

One Last, Final Thought

I’ve used this in my recent talks as I think it sums it all up. Ken Robinson said in his famous Ted talk “Children starting school this year will be retiring in 2065”. I’m updating that for 2018:

Children starting school this year will be retiring in 2081

Can you imagine what the world will be like in 2081?

If you’ve any thoughts or comments, let me know below, and you can get me on Twitter or LinkedIn or Medium. Thanks, Andy
