Systems are inserted into core infrastructure without public consultation about their effectiveness or the social groups on which they will be deployed. Often framed as public-private partnerships for citizen protection, these software systems gather huge amounts of data in a vertical hierarchy closed to public scrutiny. The lack of data privacy and transparency is unsettling at best, a breach of individual rights at worst. Will such surveillance be used in US smart cities to monitor political and social behavior for a perceived common good? What happens when governments can track huge numbers of people using such software systems?
That last question is quite unnerving. Facial recognition paired with advanced behavioral analysis. Iris scanners. Instant detection of a pre-defined action or activity in front of a camera. What happens when an algorithm identifies and targets a group of individuals? When does a technical capability become an ethical issue?