Until about 10,000 years ago, Homo sapiens lived as hunter-gatherers, traversing the land to forage for wild fruits and hunt small animals. It was a modest, constantly mobile lifestyle, and one they eventually abandoned by setting up camp and farming in fixed places for prolonged periods. Around that huge technological advancement in food production, we built our homes: places of intended security, refuge and safety.
It might seem a bit of a stretch, when talking about the future of home technology in 2020, to start with a period of around 9500-8500 BC. The truth is, so much of our modern life finds its footing in this period. These were the earliest days of domestic home life, when we developed our own technologies in what we hoped would promote a better quality of living.
But the work itself was incredibly labour-intensive, and although food supplies grew, mortality rose too, as an expanding population struggled with a new, under-nourishing diet. To make matters worse, rates of violence increased as well. With new, valuable assets like farms and permanent homes, humans had far more to protect from hungry neighbours and greedy outsiders.
Fast forward to the 21st century, and our priorities are still focused on protecting our homes and what’s in them from the outside world. If anything, we’re even more paranoid these days, and rightly so, because now there’s tech to make these crimes easier than ever. With social media-enabled cybercrime alone generating $3.25 billion in global revenue every year, it has become a global pandemic. Attacking through our internet connections, social accounts and the mics in our Alexas, cybercrime is a source of paranoia and anxiety in 2020 in much the same way as the threat of terror.
It’s hardly the secure-home ideal we were after, and as it stands, it doesn’t look like our data and assets will be any less up for grabs in the future, either. Screen Shot spoke to Ray Walsh, a data privacy expert from ProPrivacy, who predicts that with augmented reality (AR) and wearables, virtual assistants will be available both in virtual spaces and in real life. “The danger for consumers is that the technology will create a prison of sorts in which everything they do is always known […] As AI is leveraged more and more—the number of secondary inferences that those datasets produce will become more precise and imposing.”
And although funny at first, a piece of BBC Archive footage from a 1989 episode of Tomorrow’s World, which made predictions about where home technology would be by 2020, only deepened my worries about our unquenchable thirst for pointless tech, even when it doesn’t serve the public’s interests.
Just as Ridley Scott’s 1982 Blade Runner imagined the Los Angeles of 2019 as a dystopian abyss, researchers on Tomorrow’s World predicted the end of power points, with ‘plugs becoming pads, picking up power from anywhere on the wall’. Between home devices that can’t compute your request no matter how ferociously you shout at them, instant pots that claim to make yoghurt right away but actually still take a very, very long time, and 5G (when 4G already works pretty well and isn’t potentially responsible for making hundreds of birds drop dead around its cell towers), have we all lost interest in what makes practical sense?
I’m not saying that we should return to our archaic ways of laboriously hacking at the soil with basic stone tools for our dinner—after all we’ve learned, that would be pretty ‘un-sapiens’ of us. But there are virtues of the past that we seem to have forgotten in our obsession with the new: if not from 8500 BC, then from more recent periods of history that didn’t rely so heavily on the modern equipment now contributing to our planet’s swift and unforgiving downfall. Perhaps by opening ourselves up again to the vulnerabilities of the past, we might actually stand a chance.
As the self-proclaimed ‘wise man’, it’s time we used our knowledge to regulate the rate at which we innovate, curb the mainstream production of avoidable domestic tech, and focus our energies on more urgent avenues, like reversing some of its effects. In a sentiment I share, Amy Fleming of The Guardian recently made the case for “low-tech dumb cities instead of smart ones,” an argument deeply rooted in the ecological benefits of revisiting older processes that use few resources. She cites the Ma’dan people of Iraq, whose floating buildings, known as ‘mudhifs’, are made from natural reeds and can be dismantled and re-erected in just a day. There’s certainly something very hunter-gatherer about the temporality of their eco-friendly living situation.
With the threat of the climate emergency hanging around our necks like a noose, the mortifying irony of our situation is staring us in the face. Greta Thunberg’s chilling declaration that “our house is on fire” became quite literally true last year, with the 2019 Californian wildfires and the more recent Australian bushfire crisis that still rages in parts of New South Wales and Canberra. With thousands of houses burnt to a crisp, where was our shiny new-age technology when it came to putting out the fires and protecting our homes?
The need to innovate new domestic tech once had a case for improving lives across many sections of society—low-income families, the elderly, the disabled, the busy and the lonely—but in many instances, our new developments have focused mostly on novelty extravagance, which is less excusable than ever on a rapidly overheating planet.
It’s 2020, and I’m calling for a new era of responsibility, a conscious effort to only use technology that truly enriches our lives, and to uphold the homestead as a safe space to look at memes and watch as many true crime documentaries on Netflix as I want, without every data delinquent in Silicon Valley knowing about it. After all, tech to protect us from other tech is already on the rise—a warning sign that we should all take seriously.
Networked and smart home devices are increasingly commonplace—from Alexa to Nest to whatever voice-activated device Amazon announces next—though these developments aren’t always for the best. While some have raised concerns about privacy, the idea of collecting more information about how people react to specific environments has long appealed to researchers in interior design and neuroscience. The idea of a smart office is becoming more commonplace too, using novel technologies to maximise productivity and create a better working environment overall.
This year, at the Salone del Mobile in Milan, Google and Johns Hopkins University teamed up to unveil an interactive installation focused on neuroaesthetics, the study of the relationship between the brain and visual input. The installation, called Space for Being, consisted of a series of differently themed rooms, for which Google’s head of design for all hardware products, Ivy Ross, and architect Suchi Reddy chose a range of furniture, colours and materials to evoke specific emotions. On entry, each visitor was fitted with a Google-designed biometric band that measured their heart activity, breathing rate, temperature and body motion, all of which was collected for assessment by Johns Hopkins University researchers at the end.
In some sense, these fields of research have already existed for many years—interior design, architecture and other studies of the built environment all look at how people respond to the space around them. Common knowledge dictates that a child’s nursery shouldn’t be painted in shades of bright red or lime green, and modern offices increasingly tend towards minimalist, neutral decor that can accommodate multiple businesses or companies. Research has repeatedly indicated that the environment you’re in can affect your mood.
When it comes to the modern workplace, these concerns become fears about productivity: how it could be increased, and which problem areas are holding it back. If workplaces could collect data about how employees actually react to their physical environments, the logic goes, they should be able to make fixes or tweaks that improve productivity.
To some extent, this kind of monitoring, the small-scale collection of data about minute details of our lives, is already very commonplace. Fitbits have surged in popularity in the last few years, and there is a whole host of other tracking technologies that people already willingly use. So perhaps it’s unsurprising that the new workplace (and the people who design it) would want to use some of those insights.
In theory, this could mean smart offices measuring some of the mildly uncomfortable things about working in an office environment. Do you spend the majority of your day shivering, or constantly fidgeting because your chair is uncomfortable? Does your concentration drop off once a meeting runs past a certain length? Those ideas sound innocuous enough, especially if they’re geared towards minor improvements in convenience. But they could be troubling too.
As other research has suggested, networked devices and smart home gadgets can often be turned into tools of monitoring, 1984-style. At least in a house, the idea is that people can turn these devices off or disable their capabilities. But in a professional environment, individual employees don’t have much autonomy if an employer decides to deploy a new, productivity-enforcing technology. That could include monitoring how much people use their phones at work. It’s also worth remembering that these conversations around smart offices ignore the fact that blue-collar workers, such as those in Amazon warehouses, are already subject to this kind of micro-monitoring through biometric bands, and have few other options or ways to meaningfully resist.
On one hand, this could benefit employees on a very minor scale, such as adjusting the temperature in different rooms, or potentially shortening overlong meetings. But at the end of the day, those are the kinds of problems that could be solved by better communication, or by asking employees for feedback regularly. In reality, nobody wants their employer to have a record of everything they do, whether it’s networked office devices measuring how long you’ve been away from your computer or how sleepy you feel after a meeting. A careful line has to be trodden between minimally invasive products and ones that could be actively harmful, particularly once workplace power dynamics are taken into consideration.
There are privacy concerns as well. In the Google installation, visitors could watch their data being erased right in front of their eyes. Although the data collected through bands like these is arguably not that incriminating—whose heartbeat doesn’t speed up slightly when giving a presentation, for example—it’s more the principle of a data leak that is worrying. If the workplace of the future can surveil employees in these ways, some would argue it’s a slippery slope until they can start tracking employees in other ways too.