Surveillance technology in the workplace

Every move you make

Few things have been more transformed by the pandemic than how we work. Everyone got sent home in April of 2020, and – unexpectedly – work continued to be performed. Differently – but companies still functioned, salaries got paid, and the economy, which had been expected to implode, chugged along without any major disruptions.

We couldn’t have expected that sort of outcome even a decade earlier. If the pandemic had struck in 2010, we’d have seen the wheels well and truly fall off the cart, as businesses, schools and the institutions of government all shuddered to a halt. Too few people had access to the kinds of connectivity and tools that make decentralised-yet-closely-coordinated work possible. Many of those tools – such as the now-ubiquitous Slack (and its Microsoft clone, Teams) – hadn’t even been invented!

Today, well over half of office workers either want to work remotely all the time or want the flexibility to decide when they come into the office. This ‘hybrid’ world of work feels like a continuous negotiation between employers and employees: managers plead for their staff to return to the office, while employees demand good reasons before they’ll invest the hours (and dollars) commuting. Something considered table stakes just three years ago now needs to be carefully justified.

This means that whether staff sit at a desk in a CBD office tower, or sit at home in a fleece tracksuit, they need to be continuously connected through these new remote-working tools. Something that was only occasional pre-pandemic has become essential and continuous. To be at work means to be plugged into one’s colleagues, maintaining a ‘continuous partial attention’ across their tasks, their priorities, and their capacities.

All well and good, you’d suppose? But here’s where it gets complicated: being online all day long means each of us creates a stream of interactions that can be used as the inputs for systems designed to penetrate to our psychological core.

Not long ago I watched a number of brand-new tech startups pitch their products, most of them repackaged ideas I’d seen endless times before. One stood out: promising a vision of corporate harmony and fidelity – ‘all watched over by machines of loving grace’.

This brand-new product digests all of the communications created by a company – all of its emails, text chats, and Slack-like group messages – feeding them into a sophisticated machine learning system that models the behaviours of the individuals generating this stream of communication and keeps an eye out for any signs they may be expressing unusual levels of frustration, despair, depression, anger, or abuse.
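
To make that concrete, here is a minimal sketch of the kind of pipeline such a product might run – my own illustrative reconstruction, not the startup’s actual system. It scores each message with an off-the-shelf sentiment classifier and flags anyone whose recent messages trend negative; the model choice, window size and threshold are all assumptions made for the sake of the example.

```python
# Illustrative sketch only – not the vendor's product. Scores each message
# with a generic sentiment classifier, keeps a rolling per-person window,
# and flags authors whose recent messages skew strongly negative.
from collections import defaultdict, deque

from transformers import pipeline  # pip install transformers

WINDOW = 50            # messages of history kept per person (arbitrary)
ALERT_THRESHOLD = 0.6  # fraction of recent messages scored negative (arbitrary)

classifier = pipeline("sentiment-analysis")  # generic English sentiment model
history = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(author: str, text: str) -> bool:
    """Score one message; return True when the author's recent
    communications cross the negativity threshold."""
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    history[author].append(result["label"] == "NEGATIVE")
    window = history[author]
    return sum(window) / len(window) >= ALERT_THRESHOLD

# Feed a (very small) message stream through the monitor
stream = [
    ("alex", "This rollout is a complete disaster and nobody listens."),
    ("sam", "Thanks, the fix works perfectly!"),
]
for author, text in stream:
    if ingest(author, text):
        print(f"flag: {author} trending negative")
```

Even this toy version makes the point: a few dozen lines of code turn every message an employee sends into evidence in a running file.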

All of this gets pitched as ‘well-being’ – the promise that the system will maintain a continuous awareness of the mental health of the staff, helping them to manage their emotional state, for their own well-being as well as the health of others in the office. No one thrives in a ‘toxic’ workplace, and nearly everyone has had the experience of working in an office where that one employee made it difficult for everyone else. So there are clear benefits to this sort of tool.

But there are also some very obvious costs.


If you knew that your every communication with your ‘work family’ was being analysed for its emotional intention and impact; further, if you knew that each communication constituted another link in a chain of interactions, all of which added up to a machinic ‘assessment’ of your mental and psychological fitness for the office environment – well, who in that situation wouldn’t immediately begin to self-edit all of their communications?

And worse yet, what if all this monitoring happened behind the scenes, invisibly and covertly, until a ‘counselling’ moment, when that surveillance suddenly surfaced in a termination, or even a less dramatic reprimand? How would that employee feel? How would their peers feel? Could anyone feel safe in that office ever again?

These are not wholly new ethical issues. Whether it’s a person behind the scenes or an algorithm, the questions of who’s doing the monitoring, why, and to what end are always the first that need to be answered. Handwaving about ‘well-being’ does not justify continuous surveillance: down that path lies China’s ‘social credit’ system, wherein all of a citizen’s activities accrue into a ‘score’ that either grants or denies them access to a range of benefits.

While it’s too early to measure the proposed benefits of any such workplace monitoring system – in particular, whether it can intervene in toxic workplace environments any more precisely than humans can – it points to a more general capability that has become a feature of our environment: we are continuously streaming our interactions into systems that model our behaviour.


Facebook went down this path after its Initial Public Offering, using interaction data to build simulacra of its users, then deploying those simulacra to fine-tune the content presented in each user’s personalised newsfeed – dramatically increasing the time users spent there. In 2017 a leaked series of documents revealed that Facebook had real-time information on the emotional state of its users, drawn from that stream of interactions.

In the years since, the data we stream into these systems has grown exponentially: every smartwatch, smartphone, smart speaker and app contributes to this stream, all of it continuously analysed to help nudge us into particular buying decisions, life choices, and so on. Much of our lives is already circumscribed by these pervasive systems, which know so much about our emotional states – even our toxicity – yet reveal nothing.

We tend to associate that sort of ‘informational asymmetry’ – between ‘what is known’ and ‘what is shared’ – with warfare and arbitrage, not with the workplace or our oh-so-helpful devices. Are these devices helping? Are these machines truly working in our best interests? If so, why do they not reveal themselves? That sort of transparency isn’t merely an ideal: to act with full agency, we must be aware of who or what stands in judgment of our actions, and how they prosecute their case. That’s the essential lever we need to balance the scales, ensuring that we have a chance not just to act, but, where necessary, to push those forces aside, make our own choices – and, yes, our own mistakes. Once we have secured that freedom we can look to the support of others, including those ever-more-sophisticated systems. Without that freedom, we will simply be ruled by them.
