Data and Privacy Protection in the Workplace

inumankhan
23 min read · Sep 8, 2023

Introduction

Data-driven technologies are increasingly accumulating and processing information across the entire spectrum of modern society and human activity. This datafication affects almost every aspect of existence (Cukier and Mayer-Schoenberger, 2013; Neff and Nafus, 2016). As a result, it is no surprise that Human Resource Management has begun to adopt datafication for its key functions. Companies are becoming increasingly interested in data analytics' promise of monitoring their employees' behavior and performance in the office, which can occasionally extend to non-work-related behavior. 'People Analytics' promises to raise the bar on human resource management. There is frequently an underlying assumption that technology will make people-management decisions more objective, more efficient, and less likely to serve individual preferences (Finlay, 2014). As a result, personnel in a variety of businesses are increasingly being monitored on a minute-by-minute basis (Ajunwa, 2020; Ball, 2010; Mateescu and Nguyen, 2019; Prassl, 2018). Monitoring and surveillance span all levels of employment and remuneration, from call center workers to senior executives, and increasingly affect 'thinking work' (Phan et al., 2017). Banks in the City of London, for example, use surveillance technology to determine whether or not staff are present (Morris et al., 2017). Some organizations go so far as to deploy more physically intrusive surveillance methods, such as microchip implants that connect personnel to the company network (Astor, 2017).

COVID-19's global spread in the spring of 2020 substantially boosted the application of workplace analytics, for at least two reasons. First, many of the tools used in datafying the workplace are now being used and/or repurposed for public health purposes, such as tracking workers' social distancing in factories and warehouses (Vincent, 2020). Second, since the number of people working from home has expanded dramatically, there has been a considerable increase in demand for software solutions that provide remote surveillance and management capabilities, bringing the datafication of work directly into people's homes (Collins, 2020; Frantziou, 2020). These developments raise serious ethical concerns, particularly in light of the broader shift from human to computer decision-making (Mittelstadt et al., 2016). The remainder of this article is structured as follows. The first section depicts workplace monitoring and algorithmic management tactics, exposing employee privacy concerns throughout the data life cycle, from collection to destruction. Following that, we examine existing frameworks for employee protection, ranging from legal to design-based methods, noting their strengths and flaws. Building on that discussion, we then turn to the Business & Human Rights (B&HR) approach as a way of offering a structured process for mapping risks, identifying privacy gaps, and anchoring privacy due diligence in corporate practice.

Employee privacy issues along the life cycle of data

Algorithmic management has evolved to supplement, if not completely replace, traditional employer tasks (Trade Union Congress, 2018). While hiring is probably the most visible application of algorithmic management to date, Big Data HR can also be used to score employees' productivity (Heaven, 2020), to track day-to-day work behavior and, in some cases, to terminate employment relationships by firing workers with low 'rates' as calculated by an algorithm (Steele, 2020). While space constraints prevent a thorough description of these technologies here (Neff et al., 2020), put simply, the rapid rise and spread of algorithmic surveillance and supervision at work have resulted in a substantial shift in the way people work. It is easy to imagine the ramifications of such a breach of privacy. Data analytics is being used in a variety of businesses to monitor and, to some extent, anticipate individual future behavior, such as determining employee mood and willingness to undertake a task (Eubanks, 2018; Waddell, 2016). To link and analyze enormous data sets, some businesses, for example, employ neural networks (Cheekoty, 2019). These methods can reveal a great deal about a person's preferences and behavior, but they are typically criticized for not being fully traceable (Monahan, 2016; Pasquale, 2015). Employee privacy is at risk throughout the whole data life cycle (European Parliament Position, 2014: Recitals 71a, 71b, Art. 23 para. 1, Art. 33 para. 3), which can be divided into four periods in terms of the privacy concerns arising from data processing (Tamo-Larrieux, 2018):

  1. During data collection, employees may be subjected to monitoring, a lack of openness and understanding about the data-gathering process, and major power imbalances (Felzmann et al., 2019). Employees' freedom and autonomy to exercise proper control over their privacy with regard to data gathering may be limited or non-existent.
  2. During analysis, knowledge asymmetry means employees may be unaware that data analysis is taking place. An analysis may contain errors, resulting in misrepresentation of, or bias against, individuals or groups (Hong, 2016), and human interaction may be dehumanized as a result.
  3. During use, data-driven decision-making may result in discrimination against a group or an individual employee, as well as a lack of autonomy over how the data is implemented and used.
  4. Finally, when dealing with the erasure of employee data, a corporation may overlook the necessity of 'forgetting' employee data, and the process may lack autonomy, transparency, and accountability.

Misuse of data can lead to a phenomenon known as 'function creep,' in which the data obtained is used for purposes that were not previously communicated (Christl, 2017). Furthermore, transparency about, or insight into, the specific analytics processes that exploit personal data may be an issue, as many employees find themselves in a weak position to demand either.
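
To make the four periods above easier to operationalize, here is a minimal, purely illustrative Python sketch of a risk register keyed to the data life cycle. The stage names mirror the list above; the risk labels and the `open_risks` helper are our own shorthand, not terms taken from the cited frameworks.

```python
from enum import Enum

class LifecycleStage(Enum):
    COLLECTION = "collection"
    ANALYSIS = "analysis"
    USE = "use"
    ERASURE = "erasure"

# Illustrative (non-exhaustive) privacy risks per stage, following the four periods above.
STAGE_RISKS = {
    LifecycleStage.COLLECTION: ["covert monitoring", "lack of transparency", "power imbalance"],
    LifecycleStage.ANALYSIS: ["knowledge asymmetry", "analytical error", "misrepresentation or bias"],
    LifecycleStage.USE: ["discriminatory decisions", "no control over implementation and use"],
    LifecycleStage.ERASURE: ["failure to 'forget' employee data", "missing accountability"],
}

def open_risks(stage: LifecycleStage, mitigated: set[str]) -> list[str]:
    """Return the risks for a stage that have no recorded mitigation yet."""
    return [risk for risk in STAGE_RISKS[stage] if risk not in mitigated]

# Example: collection-stage risks without a documented mitigation.
print(open_risks(LifecycleStage.COLLECTION, {"lack of transparency"}))
```

A register of this kind does nothing by itself; its only purpose here is to show that each stage of the life cycle carries distinct risks that need to be tracked and answered separately.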

The inherent conflict between datafication and privacy

The more data collected about individual employees, the more useful these tools become for making predictions. While datafication technologies such as AI rely on vast volumes of data to improve accuracy, many privacy laws require compliance with the data-minimization principle (e.g., GDPR Art. 5), which is fundamentally in conflict with Big Data approaches. Any organization must walk a tightrope here: while privacy may be a top priority for compliance or corporate governance departments, business intelligence and HR management may be more interested in gathering and processing as much data as possible (Koops and Leenes, 2014).
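
To illustrate that tension concretely, the minimal sketch below applies a purpose-bound allow-list to an employee record. The purposes, field names, and allow-lists are hypothetical and chosen only to show how data minimization cuts against a 'collect everything' analytics mindset.

```python
# Hypothetical allow-lists: which fields each declared purpose actually requires.
PURPOSE_ALLOWED_FIELDS = {
    "payroll": {"employee_id", "hours_worked", "salary_band"},
    "shift_scheduling": {"employee_id", "availability", "role"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields needed for the stated purpose (data minimization in spirit)."""
    allowed = PURPOSE_ALLOWED_FIELDS.get(purpose, set())
    return {field: value for field, value in record.items() if field in allowed}

raw_record = {
    "employee_id": "e-103",
    "hours_worked": 38,
    "salary_band": "B2",
    "browsing_history": ["..."],   # collected by a monitoring tool, not needed for payroll
    "keystroke_rate": 212,
}
print(minimize(raw_record, "payroll"))  # drops browsing_history and keystroke_rate
```

The point of the example is the conflict itself: every field dropped by `minimize` is exactly the kind of data that makes predictive people analytics more powerful, which is why the two goals pull in opposite directions.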

Existing Frameworks for the protection of employee privacy at the workplace

The following sections look at some of the existing frameworks for protecting employee privacy in the workplace, emphasizing the potential of legal and technological solutions to resolve the inherent conflict of interest between datafication and privacy. In the context of broader socio-technical conceptions of privacy and workplace monitoring, we also identify a number of potential flaws in both legal and technical remedies.

Legal protection of employee privacy at the workplace

The law applicable to an employment contract and to an individual employee is always tied to a specific jurisdiction. Yet while firms may be held liable for workplace privacy violations under national labor or data protection laws, personal data records appear to float freely across nations. Companies face growing international pressure to deal with privacy issues, mainly as a result of new laws in Europe, the United States (California Consumer Privacy Act (CCPA), 2018), and Brazil (Singer, 2019; Thomas, 2019). Data-driven workplace surveillance is thus a trend that extends beyond any one country's borders. Multinational corporations operating in many jurisdictions with a variety of privacy laws will work to find a solution that preserves privacy across borders (Bhave et al., 2019; Guild, 2019). A rising body of research is looking into the importance of privacy and data protection in the workplace, with a focus on European regulatory frameworks (Brassart Olsen, 2020; Otto, 2019; Simitis, 1999). Given the EU's early regulatory innovation, there is a strong chance that the so-called 'Brussels Effect' (Bradford, 2020) will lead to the enactment of comparable legislation in jurisdictions outside of Europe. In Europe, a variety of different yet overlapping and tightly interconnected legal frameworks seek to protect aspects of employee privacy: the European Convention on Human Rights of the Council of Europe (1953), the EU General Data Protection Regulation (GDPR, 2016), the EU Charter of Fundamental Rights (2012), and national employment law systems. One facet of privacy that these instruments demand, respect for private life, extends to workplace privacy as well, as the European Court of Human Rights held in Niemietz v. Germany (1992; Grabenwater, 2014). Furthermore, the GDPR addresses privacy concerns arising from datafication in the workplace (Otto, 2019). The GDPR applies directly to private actors in EU member states and has some extraterritorial implications. Workplace data may only be collected for "specified, explicit, and legitimate purposes" (GDPR, Art. 5(1)(b)); sensitive data, such as "racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership," is further protected (GDPR, Art. 9(1)).

GDPR Article 22(1), for example, gives employees the "right not to be subject to a decision based solely on automated processing … which produces legal effects concerning [them] or similarly significantly affects [them]." As the Article 29 Working Party (WP29) guidelines on the GDPR (Regulation (EU) 2016/679) make clear, this requirement is to be construed broadly: simply 'fabricating human interaction' will not suffice. In the employment context, it appears that the use of automated scheduling software and other management tools is prohibited under Article 22, since such tools usually entail "decisions that deny someone an employment chance or place them at a substantial disadvantage" (WP29, WP 251rev.01: 22). Article 22 does, however, include a number of exceptions.

The Promises and pitfalls of design-based approaches to uphold privacy at the workplace

A variety of tech-design-based solutions have also been proposed to supplement current legal frameworks for the deployment of people analytics software; however, they are not always able to address all of the issues we have mentioned. Privacy by Design (PbD) is one such tech-based method, intended to provide improved cross-border privacy protection by embedding privacy requirements into the design of information technology, accountable business practices, and networked infrastructures (Cavoukian, 2012; Koops and Leenes, 2014; Rubinstein, 2012). PbD has the capacity to protect personal information and avoid legal action. It is based on a decentralized set-up of privacy safeguards that could be ideal for protecting privacy in international people-management practices (Koops and Leenes, 2014). Scholars from several regions and disciplines have high expectations for PbD: it is regarded as a practical compliance enabler that ensures significant procedural regularity (Kroll et al., 2017; McQuay and Cavoukian, 2010). Some experts believe that without PbD, achieving real privacy protection in the twenty-first century will be difficult, if not impossible (Dix, 2010). Indeed, design-based techniques are becoming increasingly popular outside of the tech industry, and they are gradually being incorporated into legal frameworks. Data protection by design is an 'appropriate solution' to comply with data protection regulations, according to the GDPR (GDPR, Art. 25 para. 1). The European Court of Human Rights was one of the first courts to embrace PbD-like notions, as in the case of I v. Finland (2008).

Missing contextuality for proportionality & consent

At present, it is not feasible for a specific design to respond to legal scenarios in which all conditions or contexts must be considered. Such context is critical when the proportionality of a privacy-invading measure or the voluntariness of consent is in question. The notion of proportionality is well established in the jurisprudence of the European Court of Human Rights (ECtHR), as well as in national and European Union law. In three significant cases involving workplace privacy before the ECtHR, a fair balancing test between the interests of the employer and the employee, taking into account all circumstances, has been a central element. In Köpke v. Germany (2010), for example, the case involved video surveillance of a supermarket clerk who was suspected of stealing. Consent, in addition to proportionality, is a major source of concern in the workplace. Prior, informed, and free consent is required, which necessitates extremely detailed information regarding the context of data collection and usage. In situations of subordination, when there is a considerable economic or other imbalance between the controller obtaining consent and the data subject giving it, free consent can be in doubt and hence invalid (European Union Agency for Fundamental Rights, 2018: 397). As the GDPR recitals make clear, consent should not provide a valid legal ground for the processing of personal data where there is an imbalance between the data subject and the controller (GDPR, Recital 43); consent therefore does not justify the processing of personal data between an employer and his or her employees, except in extraordinary circumstances (WP29 Opinion). When evaluating the validity of consent in the workplace, the conditions under which it is given should be carefully considered (European Union Agency for Fundamental Rights, 2018: 332). Employees must therefore have a genuine choice about whether or not to consent. A PbD approach will almost certainly be unable to incorporate all essential facts and circumstances in order to determine the validity of a particular consent. When it comes to monitoring techniques, it is critical to make sure that they are limited to the 'right context — actual workplaces and actual work activities,' as defined by a boundary that cannot be crossed through 'notice-and-consent mechanisms,' as Ajunwa et al. (2017) have argued.

Technological progress outpacing design-based privacy protection measures

Ongoing technological advancement has the potential to erode PbD's capabilities over time. On the one hand, privacy-protecting technology is progressing more slowly than newly designed privacy-invading technology (Montjoye et al., 2013); as a result, security flaws are commonplace. It is possible, however, that this lag in technical innovation is merely a temporary issue. On the other hand, today's technology may not be able to protect against privacy risks produced by tomorrow's technology, such as the possibility of anonymization in Big Data environments failing owing to re-identification (Rocher et al., 2019). The result is a gap between privacy-invading and privacy-protecting technologies. To summarize, like specific legal solutions, PbD is an appealing and widely recognized digital-age technique for protecting employee data. However, we have set out why PbD provides only limited workplace privacy protection. We must therefore look for an additional defensive strategy that works in tandem with existing ones rather than competing with them.

A broader privacy approach is necessary

We have shown that PbD's tech-based solutions are not a panacea, since complicated privacy issues do not have a simple solution (Dix, 2010; Hartzog, 2018). While businesses have a legitimate interest in maintaining employee productivity and preventing workplace misconduct, those interests do not justify invasive procedures for quantifying social modes of interaction and related performance targets (Ajunwa et al., 2017). Furthermore, data-driven models that quantify human behavior are not immune to inaccuracies or incorrect conclusions about human interaction (Nagy and Neff, 2016), and developers' social assumptions and underlying cultural ideas are frequently built into the models (Ustek-Spilda et al., 2019). As a result, any cumulative hazards to privacy arising from 'toxic mixtures' of various business rationales must be identified and addressed through human assessment. For instance, data collected for health prevention (such as for COVID-19) may be linked with data collected for performance monitoring, revealing a great deal about an employee's personal life. To understand how and when employees are subjected to monitoring technology and how they react to it, a company must address the socio-technical nature of datafication (Neff et al., 2020). Across the whole data life cycle, organizational measures must be systemically integrated and must follow an ongoing, consistent review of potential privacy threats. 'Accompanying' organizational measures for accountable business practices are among the major drivers behind the notion of PbD, but only to a minimal extent; these supporting precautions appear insufficient when compared to the scope and intrusive nature of Big Data approaches. Privacy-related decisions cannot be left to executive management or legal compliance silos. A broader approach to privacy is required, one that might include ethical requirements for the fair treatment of employees by management when deciding on workplace monitoring methods. Representatives of all affected stakeholder groups must be strategically involved.

The holistic model to uphold privacy in the workplace offered by Business & Human Rights

Knowing how organizational stakeholders make sense of the technology in use, and how much agency they have, is critical to understanding how an organization may use technology without infringing privacy (Nagy and Neff, 2015; Wagner, 2019). A fundamental advantage of a human rights-based approach is that the rightsholder is the center of focus rather than being treated as a 'passive' data subject. At the same time, it is important to recognize the dangers of a fundamental rights approach to workplace protection, particularly the charge that it is atomistic, reducing worker solidarity and thus potentially exacerbating the very inequality of bargaining power that necessitates protection in the first place (Youngdahl, 2009). One way to address that difficulty is to ensure, through ongoing stakeholder engagement, which might include trade unions, works councils, or other worker representative bodies, that collective as well as individual employee voices are brought back into the dialogue. B&HR's rights-based strategy advocates the prevention, mitigation, and remediation of negative human rights impacts across all business processes, and it applies to the workplace and to a company's workers. It has three broad advantages for workplace privacy protection (Alston, 2005; Ruggie, 2007, 2013; Wettstein, 2015, 2016). First, it refers to a generally established frame of reference, the Universal Declaration of Human Rights and the UN Guiding Principles on Business and Human Rights (UNGPs, 2011). Second, through human rights due diligence, it offers concrete management recommendations and processes that can be linked to existing risk assessment processes within the organization to achieve human rights-respecting corporate activity. Third, the concept of B&HR reaffirms the state's duty to protect human rights, including in the field of technology (OHCHR B-Tech, 2021), while maintaining a non-static perception of the state, emphasizing the obligation of business to respect human rights such as workplace privacy, and providing 'human agency' to all affected stakeholder groups through stakeholder engagement (Wagner, 2019). In accordance with the UNGPs, all businesses have a corporate duty to respect human rights in all aspects of their operations. The concept of corporate responsibility as defined by B&HR differs from corporate responsibility as defined in academic discourse on 'Corporate Social Responsibility' (CSR) or 'AI Ethics': there is no universal reference framework for CSR or AI Ethics, and definitions vary from firm to firm, ranging from voluntary efforts to industry self-regulation (Smuha, 2020). As a result, both CSR and AI Ethics have been criticized for obfuscating negative information ('whitewashing') rather than addressing fundamental causes or mitigating genuine hazards (Wagner, 2018). At the same time, the B&HR method does not ignore ethical considerations but rather links them to the UNGPs as a starting point: considerations from the many perspectives within the AI Ethics discourse can be integrated into the due diligence process (Smuha, 2020).

Introducing Privacy Due Diligence

The following describes how a corporation can implement privacy protection at work using a due diligence approach. Our Privacy Due Diligence model is based on the UNGPs' standards for human rights due diligence and incorporates insights from the literature on privacy and human behavior analytics (Ajunwa et al., 2017; Boyd and Crawford, 2012; Keats Citron and Pasquale, 2014; Prassl, 2019). We adapt the concept of human rights due diligence and lay out the standards for maintaining workplace privacy protection. This strategy goes beyond a design-based approach by including key data protection and employment law considerations. It is vital to highlight that, to do this, a company's Privacy Due Diligence process must be tailored to its specific business model. The UNGPs do not provide a specific blueprint for human rights due diligence, and the notion has been implemented in a variety of ways. The focus of all due diligence stages is on the rightsholder and the dangers of infringing on their rights, as well as on mitigating and remedying potential harms (OHCHR B-Tech, 2021). The four-step logic of the Privacy Due Diligence methodology we propose is as follows: (1) mapping the 'privacy footprint,' (2) privacy gap analysis, (3) prioritizing measures, impact mitigation, and management, and (4) anchoring workplace privacy protection. This approach integrates essential parts of the GDPR while also proposing a dynamic way for management to deal with workplace privacy issues on an ongoing basis and to enhance external stakeholder participation. This adaptability is required to keep up with the rapid pace of technological advancement: 'no one can foresee with confidence all of the ubiquitous-computing advances that will emerge in the future years, and realizing their full potential will be difficult' (Cascio and Montealegre, 2016: 354).
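
As a rough illustration of how the four steps could hang together operationally, here is a minimal Python sketch of a recurring due diligence cycle. The record fields, function names, and stub logic are all our own assumptions, not a format prescribed by the UNGPs or the GDPR.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DueDiligenceRecord:
    """Running record of one Privacy Due Diligence cycle (illustrative structure only)."""
    footprint: dict = field(default_factory=dict)   # step 1: mapped privacy impacts per practice / stakeholder group
    gaps: list = field(default_factory=list)        # step 2: legal and ethical gaps identified
    measures: list = field(default_factory=list)    # step 3: prioritized mitigation measures
    review_log: list = field(default_factory=list)  # step 4: anchoring - outcomes of recurring reviews

def run_cycle(record: DueDiligenceRecord,
              map_footprint: Callable[[], dict],
              analyze_gaps: Callable[[dict], list],
              prioritize: Callable[[list], list],
              anchor: Callable[[list], str]) -> DueDiligenceRecord:
    """One pass through the four steps; in practice the cycle repeats continuously."""
    record.footprint = map_footprint()
    record.gaps = analyze_gaps(record.footprint)
    record.measures = prioritize(record.gaps)
    record.review_log.append(anchor(record.measures))
    return record

# Toy usage with stub functions standing in for real organizational processes.
result = run_cycle(
    DueDiligenceRecord(),
    map_footprint=lambda: {"badge tracking": ["warehouse staff"]},
    analyze_gaps=lambda fp: ["no external stakeholder consultation"],
    prioritize=lambda gaps: [f"address: {g}" for g in gaps],
    anchor=lambda measures: f"review logged; {len(measures)} measure(s) assigned owners",
)
print(result.review_log)
```

The structural point is the loop: the output of the anchoring step feeds the next footprint mapping, which is what distinguishes due diligence from a one-off compliance audit.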

Mapping the ‘Privacy footprint’

The first step is to determine the scope of the privacy issues. Appreciating the privacy footprint requires a solid understanding of the technological state of the art and its potential. To comprehend all of the privacy concerns raised by a company's workforce monitoring methods, it is necessary to engage with a wide range of audiences. It is critical to ask what the purpose is (purpose specification) and to consider data reduction techniques before processing data in the workplace. Companies must understand where their operations have the greatest impact on employees' privacy. Key questions include: Which groups are affected by privacy issues, and how? Who could be especially vulnerable? As part of Privacy Due Diligence, the privacy impact assessment should follow a hybrid methodology that includes both internal stakeholder participation and the strategic involvement of additional external stakeholders. While the GDPR, for example, provides for a data protection impact assessment (DPIA) to evaluate how personally identifiable information is collected, used, shared, and retained within a company (Hartzog, 2018), it does not specifically state that the viewpoints expressed must be taken into account, nor does it include potentially affected stakeholders other than data protection officers, workers, and the supervisory body (GDPR, Art. 35 para. 2, Art. 35 para. 9). If improved by strategic consultation with potentially affected stakeholders, GDPR DPIAs can be integrated into Privacy Due Diligence. A procedure that is primarily internal risks being skewed in favor of the company's interests, whereas a method that is solely focused on reporting to a supervisory authority misses the ongoing nature of due diligence. The Privacy Due Diligence approach therefore incorporates both external and internal stakeholder participation.
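
The sketch below shows one way such a footprint could be recorded so that the key questions (affected groups, especially vulnerable groups, and internal as well as external stakeholder views) are captured per monitoring practice. The structure and field names are our own illustration, not the GDPR's DPIA template.

```python
from dataclasses import dataclass, field

@dataclass
class FootprintEntry:
    """One monitored practice in the privacy footprint (field names are illustrative, not legal terms)."""
    practice: str                     # e.g. "badge-based location tracking"
    stated_purpose: str               # purpose specification
    data_collected: list[str]
    affected_groups: list[str]        # which groups are affected, and how
    vulnerable_groups: list[str]      # who could be especially vulnerable
    internal_views: list[str] = field(default_factory=list)  # e.g. works council, DPO, HR, IT
    external_views: list[str] = field(default_factory=list)  # external stakeholder consultation

def missing_external_input(entries: list[FootprintEntry]) -> list[str]:
    """Practices mapped so far that lack any external stakeholder input (a purely internal process risks bias)."""
    return [e.practice for e in entries if not e.external_views]
```

Keeping the external-views field explicit is one way to make the hybrid methodology visible in the record itself: an entry with an empty `external_views` list flags exactly the skew the text warns about.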

Privacy gap analysis: Identify existing processes and potential disparities

In the second step, the corporation creates an inventory of the privacy-protective mechanisms in place in order to assess where gaps exist in data-based management procedures with respect to privacy protection at work. As previously stated, workplace monitoring must be limited to its proper context, the actual workplace and the actual work duties, and this requirement should not be waived through notice-and-consent processes (Ajunwa et al., 2017: 774f). Some businesses may uncover design-based solutions to emerging privacy challenges during this stage. However, as previously stated, technological solutions protect against privacy violations only to a limited extent. From the perspective of B&HR, a corporation here enters a grey area of responsibility. Again, engagement with rightsholders is critical, requiring proactive, intentional human interaction rather than relying only on technology to assess a situation (see step 1). Thus, data and privacy protection must not only adhere to legal requirements but also address ethical concerns. A gap analysis therefore entails at least two steps: A. Have all legal requirements been met? This includes considering context, proportionality, and consent, as well as ensuring that legal terminology is defined clearly and that protective measures are technologically up to date. B. Are ethical concerns about privacy (legal grey zones) addressed that could put managers or employees in a socio-technical bind? For a sound company policy on privacy across jurisdictions, this gap analysis extends beyond the legal framework and addresses difficulties arising from regulatory gaps or differing legal concepts between jurisdictions. There are well-established gap assessments that focus solely on the legal dimension and address basic issues such as legal basis, transparency, and data security; gap studies like these are simple to incorporate into a Privacy Due Diligence strategy. The systematic treatment of regulatory loopholes, on the other hand, is more difficult, and the gaps discovered depend heavily on the specific company activity, industry, or workforce type. Technological capacity must be taken into account when providing transparency about analytics models (descriptive, predictive, or prescriptive analytics). In addition, the privacy gap analysis must take into account the model's business purpose: Which sample of input data and which categorization results, for example, are chosen? Are there external variables that could bias probability calculations? Could this harm people of color, for example, by denying them a promotion or leading to dismissal owing to systematic bias in the data model (Buolamwini and Gebru, 2018)? A managerial decision must be made about how to establish transparency and minimize privacy intrusions that can lead to discrimination and harm other human rights. Stakeholder engagement requires that employees be informed about data analytics in a manner reasonable and proportionate to the scope of the analytics measures, and that they have a say in how such systems are deployed (Wagner, 2019). Privacy Due Diligence is more effective than a merely legal or technological evaluation at identifying and addressing such privacy gaps.
For example, 'Hubstaff' provides software that records employees' keystrokes, mouse movements, and visited websites, while 'Time Doctor' takes videos of employees' screens and/or webcam images every 10 minutes to check that they are at their computers (Heaven, 2020). Many decisions in this area are still made by internal decision-makers, although they should be subject to broader stakeholder engagement methods. Such judgments must be made in accordance with human rights obligations; for example, inferring political opinions, sexual orientation, or health information from an individual's clicking and browsing habits at work is scarcely justifiable. Often, rightsholders are unable to fully predict future privacy threats: the possible repercussions of new technology, in particular, cannot be fully assessed in advance. It is critical to consider the normative issues that arise from the use of workplace analytics in order to ensure ethical acceptability. A legally and ethically rigorous gap analysis therefore cannot simply work through an existing catalog of standardized, predefined items, but requires a deeper assessment based on critical ethical questions about what constitutes the right to privacy.
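
A very simple way to keep track of the two-part analysis (steps A and B above) is a checklist structure like the sketch below. The individual check items are paraphrased from the questions raised in this section and are not an exhaustive or authoritative catalog; as the text stresses, a fixed catalog cannot replace the deeper ethical assessment, so this would only be a record of open items.

```python
# Illustrative two-part gap analysis, mirroring step A (legal) and step B (ethical) above.
LEGAL_CHECKS = [
    "valid legal basis for each processing activity",
    "context and proportionality assessed",
    "consent freely given where consent is relied upon",
    "protective measures technologically up to date",
]
ETHICAL_CHECKS = [
    "no inference of sensitive traits (health, political opinion, orientation)",
    "analytics model and its business purpose explained to employees",
    "input data and categorization results checked for systematic bias",
    "monitoring limited to actual work activities",
]

def gap_analysis(confirmed: set[str]) -> dict[str, list[str]]:
    """Return the legal and ethical checks that have not been confirmed as satisfied."""
    return {
        "legal_gaps": [c for c in LEGAL_CHECKS if c not in confirmed],
        "ethical_gaps": [c for c in ETHICAL_CHECKS if c not in confirmed],
    }

# Example: everything confirmed except the bias check and up-to-date safeguards.
confirmed_items = set(LEGAL_CHECKS[:3]) | {ETHICAL_CHECKS[0], ETHICAL_CHECKS[1], ETHICAL_CHECKS[3]}
print(gap_analysis(confirmed_items))
```
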

Prioritizing measures, impact mitigation, and management

For impact mitigation and management, a corporation must set out how the gaps identified in step 2 regarding significant privacy issues may be resolved. Employees at Deloitte and Bank of America, for example, were allegedly required to wear badges that recorded everything they said and heard, analyzing the wearer's speech, its volume and pitch, and the length of time spent in a particular location, and mapping their daily paths through the office space via beacons (Steele, 2020). While all of this may appear promising in terms of providing information to the people-management department, such intrusive procedures are frequently unjustified and may not be linked to the actual output they are designed to assess (purpose limitation). When an employer states, 'I want to analyze my employees' productivity, hence I require this data,' the purpose question should follow: 'Why? What foundation do you have for this? Doesn't your need to know this infringe their privacy?' Some technologies appear to have a clear function at first glance, but closer examination reveals significant privacy flaws: smart jackets for first responders, for example, might have modules that track heart rate, temperature, movements, and location (Steele, 2020). Some modules, such as body cams, are active even when there is no emergency, allowing employers to track task completion and monitor productivity. When weighing the need to monitor work performance against the privacy intrusion, due diligence may reveal that this is not an appropriate use of data-driven monitoring. A geolocation monitoring system that tracks a delivery van and sends customers notifications when a package arrives, on the other hand, appears less problematic at first glance. However, if the van movement data can be used to dictate, for example, when an employee is allowed to take toilet or lunch breaks, it might become contentious (Schafheitle et al., 2020). Furthermore, privacy protection in the workplace clashes with the power dynamic between employer and employee, as well as with the pitfalls of a consent-based approach noted above: employees should not be forced to give up their privacy rights in exchange for work (Ajunwa et al., 2017).

Anchoring Privacy Due Diligence in business practice: reporting, evaluating, learning

To embed Privacy Due Diligence into business practice, management must find ways to make reporting, review, and learning about the privacy consequences of its operations a constant matter within the organization. Are there, for example, dedicated accountability and oversight procedures for workplace monitoring developed in consultation with affected stakeholders? Diverse membership and composition in accountability governance structures, as well as a clear, open procedure, are essential, with a focus on taking into account the perspectives of potentially excluded voices. Measures such as a policy commitment at the top level laying out the company's privacy standards, raising awareness about data processing procedures, or grievance systems that allow employees and workers to speak out against intrusive measures could all be key elements. The UNGPs recommend that operational grievance channels be made readily available to stakeholders who may be harmed. Embedding Privacy Due Diligence in company practice should also include integrating preventive and corrective procedures to act against adverse privacy impacts. The GDPR's remedial rights for data subjects can provide complementarity here (see GDPR, Arts. 15, 16, 17, 18, 20). A feedback loop should ensure that past failures are learned from and that privacy practices are improved: rather than performing a static one-time review, management should reassess accountability measures on a regular basis, based on strong stakeholder involvement. Individuals conducting the review should have the authority to update data models and algorithmic conclusions as needed, and should be able to do so regularly if necessary. Through such structural solutions, management takes responsibility for the evolving privacy challenge rather than 'outsourcing' it to the data protection authorities. Such management ownership of privacy issues is commensurate with the growing challenges to privacy in today's data-driven workplace.

Conclusion

In this essay, we set out to investigate the potential of a B&HR approach to address the privacy concerns raised by the advent of algorithmic management. Our examination of several legal and technical approaches to these issues identified a number of potential paths, as well as major gaps. Design-based techniques, for example, do not do enough to secure employees' privacy at work, despite their popularity in business circles. Legal solutions are likewise inadequate, owing to the difficulty of applying jurisdiction-specific laws to a genuinely global phenomenon. Privacy Due Diligence promises to contribute to bridging these gaps. The balancing act required by employment law and data protection law between managerial prerogative and worker protection cannot be adequately accomplished through technological solutions alone, and ex post facto litigation is not an effective technique for preventing harm. Interests must be weighed before intrusive surveillance begins, and they must be re-examined throughout the data life cycle. By building on current models of corporate due diligence processes and combining them with critical insights from data protection, regulatory frameworks, and ethical considerations, the Privacy Due Diligence model develops a company-wide approach to responsible business behavior toward privacy at work.

References

Ajunwa, I (2020) The "black box" at work. Big Data & Society, October. DOI: 10.1177/2053951720938093

Ajunwa, I, Crawford, K, Ford (2016) Health and big data: An ethical framework for health information collection by corporate wellness programs. The Journal of Law, Medicine & Ethics 44: 474–480.

Ajunwa, I, Crawford, K, Schultz, J (2017) Limitless worker surveillance. California Law Review 105: 735–776.

Astor, M (2017) Microchip implants for employees? One company says yes. The New York Times, 25 July. Available at: https://www.nytimes.com/2017/07/25/technology/microchipswisconsin-company-employees.html (accessed 17 March 2021).

Ball, K (2010) Workplace surveillance: An overview. Labor History 51(1): 87–106.

Bărbulescu v. Romania (5 September 2017) App. №61496/08 ECtHR.

Bhave, DP, Teo, LH, Dalal, RS (2019) Privacy at work: A review and a research agenda for a contested terrain. Journal of Management 46(1): 127–164.

Boyd, D, Crawford, K (2012) Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society 15(5): 662–679.

Bradford, A (2020) The Brussels Effect: How the European Union Rules the World. Oxford: Oxford University Press.

Brassart Olsen, C (2020) To track or not to track? Employees' data privacy in the age of corporate wellness, mobile health, and GDPR. International Data Privacy Law 10(3): 236–252.

Buolamwini, J, Gebru, T (2018) Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency 81: 77–91.

Bygrave, L (2017) Data protection by design and by default: Deciphering the EU's legislative requirements. Oslo Law Review 4(2): 105–120.

Bygrave, L (2020) The 'Strasbourg Effect' on data protection in light of the 'Brussels Effect': Logic, mechanics and prospects. Computer Law & Security Review 40: 105460.

California Consumer Privacy Act (2018) AB-375, 13 September.

Cascio, WF, Montealegre, R (2016) How technology is changing work and organizations. Annual Review of Organizational Psychology and Organizational Behavior 3: 349–375.

Cavoukian, A (2012) PbD: Origins, meaning, and prospects for assuring privacy and trust in the information era. In: Yee GOM (ed.) Privacy Protection Measures and Technologies in Business Organizations: Aspects and Standards. Hershey, PA: IGI Global, pp.170–208.

Hong, R (2016) Soft skills and hard numbers: Gender discourse in human resources. Big Data & Society 3(2). DOI: 10.1177/2053951716674237

I v. Finland (17 July 2008) App. №20511/03 ECtHR.

Koops, B, Leenes, R (2014) Privacy regulation cannot be hardcoded. A critical comment on the 'PbD' provision in data-protection law. International Review of Law, Computers & Technology 28(2): 159–171.

Köpke v. Germany (5 October 2010) App. №420/07 ECtHR.

Neff, G, Nafus, D (2016) Self-Tracking. Cambridge, MA: MIT Press.

Niemietz v. Germany (16 December 1992) App. №13710/88 ECtHR.

Otto, M (2019) "Workforce analytics" v fundamental rights protection in the EU in the age of big data. Comparative Labor Law and Policy Journal 40: 389–404.

Pasquale, F (2015) The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press.

Smuha, N (2020) Beyond a human rights-based approach to AI governance: Promise, pitfalls, plea. Philosophy & Technology 33: 1–14. https://link.springer.com/article/10.1007/s13347-020-00403-w

Steele, C (2020) The quantified employee: How companies use tech to track workers. PC Mag, 14 February. Available at: https://uk.pcmag.com/security-5/124891/the-quantified-employee-howcompanies-use-tech-to-track-workers

Wagner, B (2018) Ethics as an escape from regulation: From ethics-washing to ethics-shopping? In: Hildebrandt M (ed.) Being Profiling. Cogitas Ergo Sum. Amsterdam: Amsterdam University Press, pp.84–89.

Wagner, B (2019) Liable, but not in control? Ensuring meaningful human agency in automated decision-making systems. Policy & Internet 11(1): 104–122.

Youngdahl, J (2009) Solidarity first: Labor rights are not the same as human rights. New Labor Forum 18(1): 30–37.
