Two items on the environment that automation & algorithmic management create for employees and gig workers: the first concerns legal protections, the second concerns workers' reactions. In response to the second, I again bring up the idea of apps or programs for individuals that could interface with the systems companies use. Could such a technical response also link up with efforts to improve worker-protection laws?
“Worker-protection laws aren’t ready for an automated future,” by Jeffrey Hirsch, The Conversation, 28 August 2019 (posted August 2019)
In the wake of increasing use of “AI” by employers, Prof. Jeffrey Hirsch calls attention to the need to update legal protections for employees.
“What People Hate About Being Managed by Algorithms, According to a Study of Uber Drivers,” by Mareike Möhlmann & Ola Henfridsson, Harvard Business Review, 30 August 2019 (posted August 2019)
Interesting article on problems with "algorithmic management" in gig enterprises like Uber, from the point of view of workers, & on workers' responses to it. The authors suggest 4 ways to address these issues:
* share info (towards "transparency");
* invite feedback (making communication more 2-way);
* build in human contact (reducing isolation); &
* build trust.
Could there be a 5th way, one that would help with the other 4 & do more for the workers? Since their jobs are technically contract work & not employment, why not in effect develop algorithms for them as independent contractors? IOW, apps that interface with the company's system (or companies' systems, since gig drivers often drive for >1), calculate advantageous courses of action for the worker, & communicate selectively with other workers' apps (taking the "union-type organizations" mentioned in the article up a level).
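At its simplest, such a worker-side app might just normalize offers from more than one platform to a common metric & recommend the most advantageous one. A toy sketch of that idea (all platform names, fields, & numbers below are made up for illustration, not any real gig company's API):

```python
# Toy sketch of a worker-side "agent" that compares job offers
# pulled from multiple gig platforms. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Offer:
    platform: str   # e.g. "RideCo", "HaulApp" (invented platforms)
    payout: float   # estimated earnings for the job, in dollars
    minutes: float  # estimated time to complete, in minutes

def hourly_rate(offer: Offer) -> float:
    """Normalize an offer to estimated dollars per hour."""
    return offer.payout / (offer.minutes / 60)

def best_offer(offers: list[Offer]) -> Offer:
    """Recommend the offer with the highest effective hourly rate."""
    return max(offers, key=hourly_rate)

# Example: two offers from two different platforms.
offers = [
    Offer("RideCo", payout=14.50, minutes=35),   # ~ $24.86/hr
    Offer("HaulApp", payout=22.00, minutes=70),  # ~ $18.86/hr
]
print(best_offer(offers).platform)  # → RideCo
```

A real version would of course need far more (expenses, surge patterns, worker preferences, & the selective app-to-app communication suggested above), but even this level of normalization is something platforms' own algorithms don't do for workers.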
Most tech development serves corporations. Might complementary development of intelligent apps for workers not only help level the field, but also increase the dimensions of data processed & analyzed, & the efficiency of the overall system?