There's an awful lot of talk about user interfaces (and the development thereof) being unrecognizable in the next ten years. Will magically-slick, no-code tools leave UI devs in the dust? Will AI and new paradigms involving wearables and gestures leave today's 2D coders unemployable once these inevitable shifts take place?
IMHO, the answer is a strong no — if you allow yourself to adapt.
When it comes to paradigm shifts and disruptive technologies, the emPHAsis is often placed on the wrong syllABle: the looming negative effects of the disruption. In reality, UI developers are constantly absorbing skills that go beyond whatever paradigm they happen to be specializing in. And this will apply to whatever we dream up in the future if you allow yourself to see it.
Here are several ways I think the landscape will change, and how you can expect to adapt as a UI engineer.
Low-code tools can == improved roles for devs
One of the biggest pain points in product development is the constant back-and-forth between designers, user feedback, and engineers. More often than not, the values, interests, and perspectives of these parties are far apart.
So wouldn't it be ideal if designers or enterprise users could grapple with the nuances of the UI directly and implement it themselves?
Fortunately for everyone, thanks to new visual development features in popular programming tools and full low-code platforms like Airtable, Bubble, and the like, we keep inching closer to that reality. Look at the evolution of Apple's platforms alone: they went from purely code-based UI development, to little bits of freeze-dried UI called nibs, to whole flows laid out in storyboards, to SwiftUI, which can be edited without code through a nifty editor in Xcode.
Now, all that advancement may sound like it's taking the developer out of the equation, but I think what it will really do is dissolve the need for developers to pump out lists, basic screens like settings and onboarding flows, and all the other archetypes you see in the most popular web and mobile apps.
Here’s why that could actually be a promotion for developers:
- It means your focus as a developer is going to shift from tedious churn of uninteresting components to being tasked with more work that truly makes use of your skills, like next-level interactions that the tools the designers use can't quite handle on their own. After the 50th time you implement a basic list of items on screen, you've had enough for a lifetime. But honing custom interactions, insertion animations, and complex reordering logic is still something that's going to require (and be stimulating for) the engineering mind.
- Even if most apps don't require avant-garde interactions, there will be opportunity to work on the various proprietary and open-source tools that will be used by designers. In other words, it's going to have to be developers who ultimately build the low-code and no-code platforms that non-developers wind up using – so it's a whole new market to work in! And I think it's a rich one: The problems one would face in abstracting away common problems will present a world of challenges that will call for years of experience grappling with those problems directly. For example, SwiftUI, and the editor that accompanies it, is only going to get more feature-rich. Right now, a lot of functionality rests on the flexibility of code and coders who know how to take advantage of that flexibility. But you could be the person who designs the UI that allows someone who's never seen code to intuitively hook up collections of models to views that they've built. It's an art of translating a common engineering challenge into something that's trivial for someone who's not accustomed to thinking like an engineer.
- While the tools to craft the UI itself catch up, there will likely be opportunity to continue in roles often called "the backend of front-end" wherein you'll be optimizing and architecting data that needs to be consumed by said UI. Because even if designers will be in a position to grapple with visual challenges as they would in Photoshop (and handle some basic data sorting and grouping), there's not suddenly going to be an interest among designers in reasoning through the challenges of data architecture and performance, no matter how good the tools get. There's a long way to go. And if the tools ever do get that good, the landscape will change such that there will be new problems to solve that are yet unforeseen.
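That "backend of front-end" work often amounts to shaping raw records into the grouped, sorted structure a designer-built list UI can render directly. Here's a minimal sketch of what that looks like; the names (`Contact`, `Section`, `groupForList`) are illustrative, not from any particular library:

```typescript
// Illustrative "backend of front-end" sketch: turn a flat list of
// records into sorted sections ready for a list UI to consume.

interface Contact {
  name: string;
  team: string;
}

interface Section {
  title: string;
  items: string[];
}

function groupForList(contacts: Contact[]): Section[] {
  // Bucket contacts by team.
  const byTeam = new Map<string, string[]>();
  for (const c of contacts) {
    const bucket = byTeam.get(c.team) ?? [];
    bucket.push(c.name);
    byTeam.set(c.team, bucket);
  }
  // Sort section titles and the names within each section, so the
  // UI layer can render the result without any further logic.
  return Array.from(byTeam.keys())
    .sort()
    .map((team) => ({
      title: team,
      items: byTeam.get(team)!.sort(),
    }));
}
```

The point isn't the code itself, it's that decisions like "where does the sorting live?" stay with engineers even when the pixels are painted elsewhere.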
The guidance here is to dig deeper on the advanced functionality and logic of front-end programming, add more back-end knowledge to your tool belt, and even start investigating, dissecting, and dreaming up improvements for some of today’s low-code platforms.
Voice UI and other emerging mediums
Paradigm shifts will probably change the way we interact with machines in ways we can scarcely imagine — but the human condition will not. We will still have our five senses, and even brain-computer interfaces (BCIs) will be defined in terms of the auditory and visual material that structures our thoughts.
What does this mean? So long as humans are using machines, the first principles that govern great user interfaces will stay the same. It turns out most of them aren't even tied to the visual. For example, this article is one of the first that pops up in Google when you search for UI design principles, and nearly every principle it lists is sensory-agnostic (i.e., it applies no matter which senses you're using to experience the UI).
So how does this translate to preparing for the future? Non-visual interfaces are already in full swing. These include the obvious candidates like Alexa and Google Assistant, but a treasure trove of atypical user interfaces comes from the accessibility community.
In other words, make some things for Google Assistant as a side project. Start to wrap your head around its “action-oriented” APIs to get a feel for how activities that typically took place on screen have been reorganized for a conversational interface. The same kinds of quirks that arise going from screens to conversations are likely to reappear when going from anything we have today to even more unusual forms of interaction. So by the time that rolls around, you won’t feel so surprised.
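To make the reorganization concrete, here's a hypothetical sketch (deliberately not the real Actions on Google API) of an ordering flow: what used to be several screens of pickers becomes a single function that must gather missing details one conversational turn at a time.

```typescript
// Hypothetical slot-filling sketch of a conversational ordering flow.
// On screen, "size" and "drink" would be two pickers shown at once;
// in conversation, the interface has to ask for whatever is missing.

interface OrderState {
  size?: string;
  drink?: string;
}

// Returns the next prompt to speak, or a confirmation once every
// slot in the order has been filled.
function nextTurn(state: OrderState): string {
  if (!state.drink) return "What would you like to drink?";
  if (!state.size) return "What size?";
  return `Okay, one ${state.size} ${state.drink} coming up.`;
}
```

Notice the quirk the paragraph above describes: state that a screen presents all at once must be serialized into a sequence of questions, and the "UI" becomes the ordering and wording of those questions.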
Likewise, there’s a whole world of accessibility features and interfaces for individuals with disabilities that has been around for decades. The accessibility community has been working on novel ways of adapting user interfaces for those who don’t have the option to experience them in a traditional way, and some of the work there is sure to help you develop an intuition for abstracting your UI knowledge to anything we come up with. Something as simple as enabling accessibility features in your iOS app is a great start.
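The same idea applies on the web, where it's easy to see in miniature. Here's a tiny sketch (the `iconButton` helper is made up for illustration) of the principle at work: the ARIA label you add for screen-reader users today is exactly the kind of machine-readable, sense-independent description of a control that any future non-visual interface could consume.

```typescript
// Illustrative helper: render an icon-only button as an HTML string.
// The aria-label gives the control a spoken name that exists
// independently of its visual appearance.
function iconButton(action: string, icon: string): string {
  return `<button aria-label="${action}">${icon}</button>`;
}
```

Once you're in the habit of describing what a control *does* rather than how it *looks*, you've already started abstracting your UI thinking away from the screen.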
Then there’s the next level of visual UI: augmented reality. Still relegated to games and simple app tricks, the tech hasn’t reached its mainstream user interface potential, even to the degree that the aforementioned voice platforms have. But the potential is there, and learning the ins and outs of Google’s and Apple’s frameworks (ARCore and ARKit, respectively) should be on most front-end and UI engineers’ to-do lists at this point. The next step toward building extensive AR interfaces will be to learn a 3D engine like Unity or Unreal.
Automations may have logic, but can they build beauty?
Now you might think the biggest trump card of all has yet to be considered: artificial intelligence. If it's already going to supplant millions of jobs, why won't UI engineering be one of them? After all, there are already AI tools that can write HTML by themselves.
So what then?
Turns out, humans will always be more creative than AI. AI doesn’t think; it mimics human information processing but in a way that crunches larger loads of information more efficiently. It relies on humans for the input and direction and will ultimately continue to do so even as it gets better at reducing the number of human points of input. Thinking requires sentience, and I’ve not seen anyone in the tech world who seems to have any sort of meaningful grasp on what sentience really means.
And as the kinds of things humans directly use computers for change, there will be a continuing need for humans to direct our AI toward those newfound needs.
In other words, let’s go ahead and say that 99% of practical applications of machines are handled by AI. We’re in a post-work world. The machines are meeting our physical needs for us. We did it.
Humans aren’t going to stop doing things and creating. We’re not going to stop making meaning in the world. Even if we meet all practical concerns, our needs will evolve from an emphasis on the materially practical to the experientially practical. New tools and user experiences will need to be created, but they will simply be for different purposes like entertainment, philosophy, and art.
So, to prepare for such a brave new world, a modern-day UI engineer might actually benefit from enhancing their design chops, or, dare I say it, dabbling in the liberal arts, to be ready for a fundamental reorientation around what user interfaces will even exist for.
UI engineers are not going to disappear, because the engineering skillset will always be needed somewhere along the chain of creation for user interfaces. Sure, churning out cookie-cutter components is going to be less and less an engineer’s responsibility, and automation might even make all sorts of tasks that currently require a developer extinct. But that doesn’t mean you need to become obsolete. Shifting your mentality about your role, expanding your skills, and thinking in terms of bigger and potentially more interesting problems is how you can embrace the upcoming changes and look forward to a whole new kind of professional purpose.
Triplebyte helps engineers find great jobs by assessing their abilities, not by relying on the prestige of their resume credentials. Take our 30 minute multiple-choice coding quiz to connect with your next big opportunity and join our community of 200,000+ engineers.