Taxonomy
Browse every element in the ai@2026-05-06-beta schema.
Purpose
Ensures that everyone has equal access to a space or a service.
Enables artistic and/or cultural expression.
Supports immigration, asylum, or border-crossing decisions or processing — for example, border-control facial recognition, visa adjudication, asylum risk assessment.
Enables the buying and selling of goods and services.
Filters, ranks, or removes user-generated content based on policies — for example, comment moderation on a public-facing kiosk, image filtering, or hate-speech detection.
Supports the provision of food or meal services.
Supports the measurement or monitoring of the natural environment.
Supports teaching, learning, assessment, or educational guidance — for example, tutoring assistants, automated grading, or library and museum recommenders.
Determines eligibility for, or distribution of, government benefits and essential public services — for example, welfare allocation, housing assistance, child-services risk scoring.
Supports hiring, scheduling, performance evaluation, or workplace operations — for example, resume screening, shift assignment, productivity monitoring.
Reduces energy use and/or helps conserve energy.
Supports the enforcement of rules or regulations.
Supports authentication or validation in order to access a space or a service.
Supports lending, insurance, fraud detection, or other financial decisions about people — for example, credit scoring, insurance underwriting, transaction-fraud flagging.
Supports services that ensure public safety and health related to emergencies.
Measures or monitors aspects of the physical environment that affect human health — air quality, water quality, radiation, noise, or workplace exposures. Distinct from Healthcare, which supports clinical care for individuals.
Supports clinical care for individuals — for example, diagnosis assistance, treatment recommendations, patient triage, or health chatbots. Distinct from Environmental Health, which monitors the physical environment.
Supports general-purpose information assistance — concierge, FAQ, signage, or guidance about a service. Distinct from Wayfinding & Services, which covers spatial orientation and routing within a place.
Supports the movement of goods or materials.
Targets advertising, personalizes recommendations, or profiles audiences for marketing — for example, in-store loyalty offers, personalized signage, behavioral ad targeting.
Supports how people and materials move around.
Supports forecasting and policy-impact analysis at the population or system level — for example, urban planning, infrastructure planning, or measuring the impact of a policy decision. Distinct from Risk Assessment & Triage, which scores individuals.
Supports a deployed research instrument — a system that studies a place, population, or environment as part of a research program. Distinct from production AI that delivers a service to the public.
Scores or prioritizes individuals or cases for further attention or action — for example, recidivism scoring, hospital triage, child-welfare risk flags. Distinct from Planning & Decision-making, which forecasts at a population or system level.
Supports preventive risk reduction in physical environments — for example, fire safety, home security, or ensuring safe passage in airports or on roads. Distinct from Enforcement (rule application) and from Fire & Emergency (incident response).
Translates speech, text, or signs to make a place or service accessible across languages — for example, real-time interpretation at a service counter, multilingual signage.
Supports the handling and disposal of waste, including recyclables, compost, and hazardous materials.
Reduces water use and/or helps conserve water.
Enables navigation of a location and its amenities and services.
Accountable
The entity that is responsible and accountable for this data collection activity.
Functional Modes
Acts — breaks a goal into steps and carries them out, calling tools, browsing, scheduling, or composing multi-step workflows on the user's or operator's behalf. The output is an action taken in software, not a piece of content returned. Examples: booking a meeting, filing a return, escalating a ticket, running a campaign loop.
Decides — predicts, classifies, scores, or ranks from structured data. The system reads numeric or categorical inputs and returns a label, a score, or a ranking. It does not generate new content. Examples: a credit-risk score, a fraud flag, a queue-length forecast, a recommendation rank.
Creates — produces new text, images, audio, video, or code that did not exist before this system ran. The output is content authored by the system, not a label or score about existing data. Examples: drafting a paragraph in response to a prompt, generating an image from a description, composing audio, completing a snippet of code.
Senses — turns raw camera, microphone, or scanned-document signals into structured detections, transcriptions, or extracted fields. The system reads pixels, audio, or document images and returns things the rest of the pipeline can use. Examples: detecting a vehicle in a camera frame, transcribing a spoken sentence, reading the fields off a paper form.
Moves — acts on the physical environment. The system drives motors, valves, signals, or other actuators and changes something in the world. Examples: steering a robot, changing a traffic-signal phase, opening a gate, adjusting an HVAC valve, releasing a parking-garage barrier.
Understands and remembers — pulls meaning from text or speech, finds related ideas, and grounds the system in saved context. The system reads language and returns matches, intents, summaries, or links to relevant prior knowledge. It does not generate new content. Examples: matching a customer's question to the right help article, retrieving the policy clause that applies to a case, recognizing the topic of a recorded sentence.
Risks & Mitigation
The risk that the AI system limits a person's ability to make their own decisions, control their identity, or choose freely. This includes risks of restricted access to alternatives, deceptive design that nudges decisions toward a particular outcome, and lack of meaningful consent. Mitigations may include opt-out mechanisms, transparent recommendations, defaults that protect agency, and human review of consequential decisions.
The risk that the AI system compromises fundamental human rights and civil liberties — speech, assembly, movement, due process, or freedom from arbitrary surveillance. This includes risks from mass facial recognition, predictive policing, censorship of lawful expression, and chilling effects on protest. Mitigations may include strict purpose limits, judicial or independent oversight, narrow data retention, and human-rights impact assessments before deployment.
The risk that the AI system causes environmental degradation through energy consumption, water use, e-waste, carbon emissions, or resource extraction for hardware. This includes risks from training-time compute, always-on inference, cooling demands of data centers, and rapid hardware churn. Mitigations may include energy and water efficiency targets, transparent reporting of compute footprint, hardware-lifecycle planning, and renewable-energy procurement for data-center operations.
The risk that the AI system causes financial losses to individuals or organizational damage through erroneous pricing, denied services, fraud, or market manipulation. This includes risks from automated credit decisions, dynamic-pricing inequities, AI-enabled scams, and supply-chain disruption from misuse. Mitigations may include affordability guards, fairness audits of pricing models, fraud-detection layers, and clear redress procedures.
The risk that the AI system's outputs lead to physical injury to individuals or damage to property. This includes risks from autonomous vehicles, robotic actuation, faulty navigation guidance, or critical-infrastructure errors that put bodies or property in harm's way. Mitigations may include rigorous safety testing, fail-safe defaults, geofencing of hazardous behavior, and human oversight of high-stakes physical actions.
The risk that the AI system manipulates political discourse, interferes with elections, concentrates market power, or damages public institutions. This includes risks of synthetic media targeting voters, opaque ad-targeting, competitive distortion from monopoly access to data and compute, and erosion of institutional trust. Mitigations may include disclosure of synthetic content, antitrust attention to AI markets, audit access for regulators, and political-ad transparency requirements.
The risk that the AI system causes emotional or mental-health impairment, directly or indirectly. This includes risks from addictive feedback loops, distress caused by surveillance, harassment via generated content, and anxiety from automated denials. Mitigations may include content warnings, well-being safeguards, age-appropriate design, accessible support channels, and rate limits on engagement-maximizing behavior.
The risk that the AI system damages the reputation of individuals, groups, or organizations through misidentification, false categorization, or stigmatizing labels. This includes risks of inaccurate face recognition, defamatory generated content, and unfair public scoring that follow people across contexts. Mitigations may include accuracy thresholds before public-facing labels are applied, human review of high-impact identifications, takedown procedures, and clear rights to correction.
The risk that the AI system harms communities or culture through erosion of trust, loss of linguistic and cultural diversity, or unhealthy dependency on opaque systems. This includes risks of misinformation at scale, homogenization of cultural expression, replacement of local knowledge, and degradation of public-information ecosystems. Mitigations may include multilingual support, partnerships with affected communities, provenance signals on generated content, and investment in local-language and local-context models.
Rights
The right to request and receive information about what personal data an AI system has collected about you, how this data is being used, and what decisions have been made using this information. This includes the right to obtain a copy of your data in a readable format.
The right to be informed in plain language about how an AI system works in general — the logic involved, the significance of the processing, and the likely consequences for the people it affects. This is the right to understand the system itself; the right to an explanation of a specific decision made about you is a separate right.
The right to have your personal data deleted from an AI system when the data is no longer needed for its original purpose, when you withdraw consent, or when there is no legitimate interest in continuing to process it. Where technically feasible, this also includes having the system "unlearn" information derived from your data. This is a right of deletion; the right to stop processing without deletion is a separate right.
The right to challenge a decision an AI system has made about you — to provide additional information, express your point of view, and have the decision reconsidered based on your input. This is a remedy after a decision has been made; the right to a human review before a decision is acted on is a separate right.
The right to have inaccurate or incomplete data that an AI system holds about you corrected or completed. This is separate from the right to access your data (which is the right to read it) and the right to be forgotten (which is the right to delete it). When the original data is corrected, downstream decisions made from the faulty data should be revisited.
The right to a clear and meaningful explanation of how an AI system contributed to a specific decision affecting you — what role the system played, the main factors that led to the outcome, and what it means for you. This right covers the decision actually made about you; the right to be informed about how the system works in general is a separate right.
The right to be free from discriminatory treatment by AI systems based on protected characteristics such as race, gender, age, religion, disability, or sexual orientation. Organizations must implement and demonstrate appropriate technical and organizational measures to prevent discriminatory outcomes from their AI systems.
The right to ask an organization to stop using your personal data, even when the data itself is not deleted. This is separate from the right to be forgotten (which deletes the data) and from the right to purpose limitation (which constrains the original stated purposes). It is the user-invoked stop-now affordance — your data may stay on file, but no further processing should happen against it.
The right that your data is only used for the specific purposes that were stated when it was collected. The organization must declare and document those purposes before collection begins, and is barred from later using your data for new, unrelated purposes without your knowledge or consent. This is an upstream limit on what use is ever allowed; the right to stop processing already underway is a separate right.
The right to have a person — not an automated system — make or meaningfully reconsider a decision affecting you. This is a path to a non-automated decision, separate from the right to challenge a decision that has already been made. The reviewing person must have the authority and information to actually change the outcome.
The right to know that an AI system is in operation before any decision affecting you is made. This includes being told when you are interacting with an automated system rather than a person, when content is AI-generated, and when biometric or emotion-recognition technology is in use. This is the precondition for every other right in this category — you cannot exercise rights you do not know apply to you.
Input Dataset
Sensor readings — air quality, temperature, sound level, energy, water flow. The AI takes in numeric measurements from physical sensors, usually with no person attached. Examples: PM2.5 readings from an air-quality monitor, decibel levels from a sound sensor, kilowatt-hours from a meter.
Locations, routes, surroundings, or floor plans. The AI takes in data describing where something is. Examples: GPS coordinates from a phone, foot-traffic counts on a sidewalk, a map of building corridors.
Taps, choices, paths walked, dwell time, purchases, or queries. The AI takes in records of what people did. Examples: tapped destinations on a wayfinding kiosk, items added to a cart, a search query typed into a public terminal.
Faces, fingerprints, voice prints, gait, gestures, gaze, posture. The AI takes in biological signals from a person's body. Examples: a face captured at a turnstile, a voice recorded by a kiosk microphone, gait analyzed by an overhead camera.
An eligibility, classification, ranking, or yes/no that another system already made about a person. The AI takes in this prior decision and uses it as input. Examples: a credit score from another model, an access-allow flag from upstream, a triage class assigned earlier in the pipeline.
Text, images, audio, or video that another AI made. The AI takes in this synthetic content and processes it further. Examples: an LLM-written summary fed into a translator, a generated photo passed to a moderation model, a text-to-speech clip routed to an audio system.
Schedules, routes, budgets, occupancy counts, public records, or other administrative data — not about any one person. The AI takes in data describing how a place or service runs. Examples: school occupancy by hour, trash-collection routes, a transit timetable, a department budget.
A signal that something in the physical world changed — a door opening, a light turning on, an alert sounding. The AI takes in evidence of these changes from upstream actuators or sensors. Examples: a turnstile-unlock event, an HVAC adjustment recorded by a sensor, a public-address alert played upstream.
A suggestion, forecast, risk score, or ranking produced earlier by another system. The AI takes in this advisory output and uses it. Distinct from a binding decision because it advises rather than determines. Examples: a forecasted demand curve, a risk score from a prior model, a recommended route from a navigation service.
Health, finances, beliefs, sexuality, immigration status, or other personal information that carries legal and social risk. The AI takes in this kind of protected data. Examples: insurance eligibility at a clinic kiosk, a benefits-application status, a self-declared pronoun.
Algorithm or Model
AI systems that infer emotional state, mood, or sentiment from text, speech, facial expressions, or physiological signals. Used to gauge crowd sentiment, evaluate service experience, or screen for distress. The accuracy and cultural fairness of affect inference are contested, and some jurisdictions restrict its use in workplaces and public services.
AI systems that flag unusual events, patterns, or values that depart from a learned baseline of normal behavior. Used for equipment-failure prediction, fraud screening, intrusion detection, water-leak monitoring, and unusual crowd-flow alerts. Outputs are typically thresholded scores rather than direct decisions.
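The mechanics can be illustrated with a minimal sketch: learn a baseline from historical readings, score each new value by its distance from that baseline, and apply a threshold. The water-flow figures and the three-standard-deviation cutoff below are illustrative assumptions, not part of the schema.

```python
# Minimal anomaly-detection sketch: z-scores against a learned baseline.
from statistics import mean, stdev

def anomaly_scores(history: list[float], new_values: list[float]) -> list[tuple[float, bool]]:
    """Score each new value by its distance from the historical baseline."""
    baseline, spread = mean(history), stdev(history)
    results = []
    for v in new_values:
        score = abs(v - baseline) / spread    # z-score: distance in standard deviations
        results.append((score, score > 3.0))  # a thresholded score, not a direct decision
    return results

# A week of normal water-flow readings, then two new readings to screen.
normal = [101.2, 99.8, 100.5, 98.9, 100.1, 101.0, 99.5]
print(anomaly_scores(normal, [100.3, 142.7]))  # the second reading is flagged
```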
AI systems that identify or verify a specific person from biological signals — face, fingerprint, voice, iris, or gait. Distinct from generic computer vision because the output is tied to a named individual. Many jurisdictions restrict or prohibit biometric recognition in publicly accessible spaces.
AI systems that assign a label to an input or estimate a numeric outcome — including classifiers (spam vs. not spam, risk tier), regressors (predicted demand, expected wait), and time-series forecasters (next-hour ridership, energy load). Built from labeled historical data using statistical or machine-learning models.
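A minimal sketch of the classifier case, using scikit-learn. The features (link and exclamation counts) and the spam labels are invented for illustration; the point is only that the model is fit from labeled historical data and returns a label plus a score.

```python
# Minimal classifier sketch: fit on labeled history, predict a new case.
from sklearn.linear_model import LogisticRegression

# Historical examples: [link_count, exclamation_count], labeled 1 = spam, 0 = not spam.
X = [[8, 5], [7, 3], [0, 0], [1, 1], [9, 7], [0, 1]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)
print(model.predict([[6, 4]]))        # predicted label for a new message
print(model.predict_proba([[6, 4]]))  # the score behind the label
```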
AI systems that group people, places, or events into clusters based on similarity, without using pre-defined labels. Used to segment audiences, mobility patterns, energy-use profiles, or service-demand zones. Outputs are cluster assignments that downstream systems may treat as categories — even though clusters are inferred, not authored.
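A minimal clustering sketch under assumed data: the household energy profiles and the choice of three clusters below are illustrative, not prescribed.

```python
# Minimal clustering sketch: group energy-use profiles by similarity, no labels.
from sklearn.cluster import KMeans

# Each row: [morning kWh, evening kWh] for one household.
profiles = [[1.0, 5.2], [1.1, 4.9], [6.3, 1.2], [5.9, 1.0], [3.1, 3.0], [2.9, 3.2]]
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)
print(km.labels_)  # cluster assignments: inferred groupings, not authored categories
```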
AI systems that interpret images or video — counting people, detecting objects, reading text in pictures, tracking motion, or segmenting scenes. Common in occupancy sensing, traffic and crowd monitoring, accessibility assistance, and signage inspection. Distinct from biometric recognition, which identifies specific individuals.
AI systems that read, write, summarize, translate, or hold a conversation in human language. Includes large language models (LLMs), chatbots, and generative-text systems. They work by predicting the most likely next tokens given the input and any prior context.
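Next-token prediction can be illustrated at toy scale. The hand-written bigram table below stands in for a learned model; real LLMs learn these probabilities over vast vocabularies and long contexts, but the generation loop has the same shape.

```python
# Toy next-token prediction: a bigram table standing in for a learned model.
bigram_probs = {
    "the": {"train": 0.6, "platform": 0.4},
    "train": {"departs": 0.7, "arrives": 0.3},
    "departs": {"now": 1.0},
}

def generate(token: str, steps: int) -> list[str]:
    out = [token]
    for _ in range(steps):
        nxt = bigram_probs.get(out[-1])
        if not nxt:
            break
        # Greedy decoding: pick the most likely next token given the context.
        out.append(max(nxt, key=nxt.get))
    return out

print(generate("the", 3))  # ['the', 'train', 'departs', 'now']
```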
AI systems that search for the best assignment, route, schedule, or allocation under constraints. Includes vehicle routing, transit signal timing, energy and HVAC scheduling, staff rostering, and resource allocation. Works by exploring possibilities to maximize or minimize a stated objective such as cost, time, or emissions.
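A minimal sketch of search under a stated objective: brute-force staff rostering to minimize total cost. The staff, shifts, and cost figures are assumptions, and production systems use dedicated solvers rather than enumeration, but the structure (explore assignments, keep the best by the objective) is the same.

```python
# Minimal optimization sketch: enumerate assignments, minimize total cost.
from itertools import permutations

staff = ["Ana", "Ben", "Chi"]
shifts = ["early", "mid", "late"]
cost = {("Ana", "early"): 2, ("Ana", "mid"): 5, ("Ana", "late"): 9,
        ("Ben", "early"): 4, ("Ben", "mid"): 3, ("Ben", "late"): 6,
        ("Chi", "early"): 7, ("Chi", "mid"): 4, ("Chi", "late"): 2}

best = min(permutations(shifts),
           key=lambda p: sum(cost[(s, sh)] for s, sh in zip(staff, p)))
print(dict(zip(staff, best)))  # {'Ana': 'early', 'Ben': 'mid', 'Chi': 'late'}
```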
AI systems whose primary job is to transform input data so identifying details are removed, blurred, or replaced before any downstream use. Includes face blurring, license-plate masking, tokenization, k-anonymization, differential privacy, and synthetic-data generation. Acts as a barrier between raw collected data and the data used for analysis or decisions.
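One technique from this category, sketched minimally: a differentially private count, where calibrated Laplace noise is added before the figure leaves the raw data. The epsilon value and visitor records are assumptions for illustration.

```python
# Minimal differential-privacy sketch: release a noisy count, not raw records.
import random

def dp_count(records: list, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise; one person changes the true count by at most 1."""
    sensitivity = 1.0
    scale = sensitivity / epsilon
    # Laplace(0, scale) sampled as the difference of two exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return len(records) + noise

visitors = ["v1", "v2", "v3", "v4", "v5"]
print(dp_count(visitors))  # a noisy count, safe to publish downstream
```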
AI systems that suggest or order items for a person — services, listings, news, wayfinding options — based on preferences, behavior, or similarity to other people. Includes collaborative filtering, content-based recommenders, and learned ranking. Personalizes what each person sees from the same underlying catalog.
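A minimal item-to-item collaborative-filtering sketch: recommend what overlapping sessions also chose. The session histories are illustrative assumptions; real recommenders learn these associations at scale.

```python
# Minimal collaborative-filtering sketch: co-occurrence with similar sessions.
from collections import Counter

sessions = [
    {"fountain", "garden", "cafe"},
    {"fountain", "garden"},
    {"garden", "gallery"},
    {"fountain", "cafe"},
]

def recommend(seen: set[str], k: int = 2) -> list[str]:
    # Count items co-occurring with anything this person already chose.
    counts = Counter(
        item
        for s in sessions if s & seen   # sessions overlapping this person's history
        for item in s - seen            # candidates they have not chosen yet
    )
    return [item for item, _ in counts.most_common(k)]

print(recommend({"fountain"}))  # ['garden', 'cafe']
```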
AI systems that find and surface relevant content from a body of documents, records, or media in response to a query. Includes semantic search, vector retrieval, and retrieval-augmented generation (RAG) pipelines that feed relevant passages into a language model. Used in document QA, public-records lookup, and library or archive search.
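The retrieval step can be sketched with toy vectors: embed the query, rank passages by cosine similarity, and hand the top matches to a language model as context. The three-dimensional "embeddings" below are assumptions; real pipelines use learned embedding models and a vector index.

```python
# Minimal retrieval sketch: rank passages by cosine similarity to the query.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

passages = {
    "Permits are issued at counter 3.": [0.9, 0.1, 0.0],
    "The archive opens at 9am.":        [0.1, 0.8, 0.2],
    "Fee waivers require form B-12.":   [0.7, 0.2, 0.3],
}
query_embedding = [0.8, 0.15, 0.1]  # stands in for the embedded query

top = sorted(passages, key=lambda p: cosine(query_embedding, passages[p]), reverse=True)[:2]
prompt = "Answer using only these passages:\n" + "\n".join(top)
print(prompt)  # the retrieved context fed into the language model
```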
AI systems that convert between spoken words and other formats, or that classify non-speech sound. Includes speech-to-text (transcription), text-to-speech (synthetic voice), keyword spotting, and audio event detection. Voice-as-identity (recognizing who is speaking) belongs in Biometric Recognition.
Output Dataset
The AI produces numeric measurements over time — sensor outputs, derived quality scores, forecasted values. Examples: an hourly air-quality score for a plaza, a predicted ridership curve for tomorrow, an aggregated noise-level reading published to a public dashboard.
The AI produces location, route, region, or floor-plan output. Examples: a wayfinding route shown on a kiosk, a heatmap of foot-traffic density, a geofence boundary published to downstream systems.
The AI produces records of what people did — paths walked, items selected, dwell times, interaction patterns. Examples: a weekly report of most-visited destinations, a session log of touchscreen interactions, an inferred route someone took through a venue.
The AI produces output containing biological signals from a person's body — face templates, voice embeddings, gait vectors, gesture sequences. Examples: an enrolled face template stored downstream, a voice embedding for later matching, a gait signature exported to another system.
The AI produces a binding determination about a person — eligibility, classification, ranking, or yes/no. Examples: an access-control system deciding to open a gate, an automated benefits eligibility result, an automated screening that rules someone out.
The AI produces new content — text, images, audio, or video that did not exist before. Examples: an LLM-written service advisory shown on a sign, a synthesized voice announcement played over a public-address system, a generated illustration on a digital kiosk.
The AI produces administrative output — schedules, routes, occupancy estimates, budget allocations, or other records about how a place or service runs. Not about any one person. Examples: an optimized trash-collection route, a school occupancy projection, a recommended budget allocation for next quarter.
The AI triggers something in the physical world — a door unlocking, a light turning on, an alert sounding, signage changing, HVAC adjusting. Examples: a turnstile that opens on facial match, a public-address alert played automatically, an HVAC adjustment commanded by a building model.
The AI produces an advisory output — a suggestion, forecast, risk score, or ranking. Distinct from a binding decision because it advises rather than determines. Examples: a signage system suggesting an alternate route based on predicted crowding, a forecasted demand curve for tomorrow, a recommended next destination on a kiosk.
The AI produces output about health, finances, beliefs, sexuality, immigration status, or other categories that carry legal and social risk. Examples: a triage priority assigned at a clinic, an eligibility result sent to a benefits portal, an inferred protected attribute exported downstream.
Access
The data collected may be resold to other third parties.
Data is available to third parties not involved in the data activity. This does not always mean that data is being resold.
Data that can be accessed and downloaded online, either for free or for a fee.
Available to me but not to other individuals. For example, as an individual you have access to all your electronic toll records for your car, but other individuals do not have access to that.
Data is available to the accountable organization.
Data is available to the data collection or technology provider.
Not available to me or other individuals. As an individual, there isn't a way for you to access this data.
Data is not available to the accountable organization.
Data is not available to the data collection or technology provider.
Retention
Data is stored for {{duration}} and is deleted after this time period.
No data is kept or stored.
Data Storage
Data is backed up outside the jurisdiction where it was collected.
Data is backed up within the jurisdiction where it was collected.
Data is stored in the jurisdiction where it was collected.
Data is stored on behalf of the organization or the data collector in an off-site data center, such as Amazon Web Services, Google Cloud, or Microsoft Azure.
Data is stored outside the jurisdiction where it was collected.
Data is stored mainly in the jurisdiction where it was collected.