Designing Ethical Microtask Platforms for Robot Training: A Guide for Product and Engineering Teams
A product-and-engineering checklist for ethical microtask platforms: safety, pay transparency, data quality, and auditability.
Designing Ethical Microtask Platforms for Robot Training: Why Product Choices Matter
Microtask platforms are no longer just “labeling engines” for images and text. They increasingly power robot training, humanoid manipulation demos, embodied AI evaluation, and ML datasets that shape how real-world systems behave in homes, hospitals, warehouses, and public spaces. That means platform design decisions now affect worker safety, data quality, pay fairness, compliance exposure, and downstream model risk all at once. If you’re a product manager or engineer building in this space, you are not merely shipping a task marketplace; you are designing the operating system for human-in-the-loop intelligence.
The current wave of robot training also changes the labor model. A task that used to be “click whether there’s a cat in the frame” can now require synchronized video capture, motion imitation, time-stamped event logging, and repeated iterations that expose workers to cognitive fatigue or physical strain. That is why teams should study adjacent operational patterns, such as reading labor signals before the next hire, and borrow the rigor used in secure synthetic presenter SDKs and safe AI prototype logging frameworks. The lesson is simple: ethics should be engineered as product infrastructure, not layered on later as policy copy.
In this guide, we’ll turn that principle into an actionable checklist for product and engineering teams. We’ll cover worker safety, pay transparency, task design, quality control, auditability, and the compliance guardrails needed to build trustworthy ML datasets. You’ll also get a practical comparison table, implementation tips, and a rollout sequence you can use before launch. For teams already operating at scale, this is also a good moment to revisit workflow automation choices by growth stage and align them with the expectations of a modern microtask marketplace.
1) Start With the Risk Model: What Kind of Microtask Platform Are You Building?
Training data platform, labor marketplace, or robot ops layer?
Not all microtask platforms are ethically or operationally identical. Some are general-purpose crowdsourcing tools; others are purpose-built for robot training, teleoperation, simulation validation, or edge-case data collection. Your risk profile changes depending on whether workers are annotating autonomous driving footage, recording household movements for humanoid control, or reviewing safety-critical outputs for industrial robotics. Before writing a single user story, product teams should define the exact usage model, because the safety policy, compensation model, and audit log all depend on it.
A useful framework is to classify tasks by physical burden, cognitive burden, privacy sensitivity, and downstream harm potential. A basic image label might be low-risk, while a motion-capture task in a worker’s home could create medium privacy exposure and serious quality consequences if mislabeled. For a deeper analogy, think about how regulated workflows are handled in HIPAA, legal, and financial scanning basics: the same “scan a document” action can be trivial or highly sensitive depending on the records involved. Your task taxonomy should be equally explicit.
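To make that taxonomy concrete, here is a minimal sketch of how task risk classification could be encoded. The dimension names, 0–3 scores, and tier thresholds are illustrative assumptions, not an industry standard; a real rubric should be set with trust-and-safety and legal review.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskRiskProfile:
    # Illustrative 0-3 scores per risk dimension (assumed scale).
    physical_burden: int      # 0 = none, 3 = sustained physical activity
    cognitive_burden: int     # 0 = trivial, 3 = repeated judgment under uncertainty
    privacy_sensitivity: int  # 0 = synthetic data, 3 = home/biometric capture
    downstream_harm: int      # 0 = low stakes, 3 = safety-critical model behavior

    def tier(self) -> str:
        """Map the dominant risk dimension to a review tier (assumed thresholds)."""
        worst = max(self.physical_burden, self.cognitive_burden,
                    self.privacy_sensitivity, self.downstream_harm)
        return {0: "low", 1: "low", 2: "medium", 3: "high"}[worst]

# Example: motion capture recorded in a worker's home.
home_mocap = TaskRiskProfile(physical_burden=2, cognitive_burden=1,
                             privacy_sensitivity=3, downstream_harm=2)
assert home_mocap.tier() == "high"
```

Using the worst single dimension, rather than an average, keeps one severe risk from being diluted by three mild ones.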
Define the platform’s ethical boundaries early
Product teams often try to solve ethics with policies like “we don’t allow dangerous tasks.” That is too vague to ship. Instead, specify hard boundaries: no tasks requiring unsafe physical activity, no ambiguous consent for home-recorded data, no hidden bonus systems that distort pay, and no data retention beyond the intended dataset lifecycle. If your platform supports sensitive capture or identity-linked workflows, it should borrow the discipline seen in zero-trust healthcare deployment patterns and secure document workflow selection.
Teams that define boundaries upfront usually move faster later. They spend less time on trust-and-safety escalations, less time rewriting worker support macros, and less time defending quality anomalies to customers. That clarity also makes legal review easier, since compliance teams can review a concrete operating model rather than a vague “AI labor platform.” If you are building in a highly competitive marketplace, this clarity becomes a differentiator, much like the operational discipline described in AI-powered talent identification systems.
Checklist: the first 10 decisions to lock down
Before launch, confirm the answers to these ten questions:

1. What exact behaviors are workers producing?
2. Is the worker in a private, public, or employer-owned environment?
3. Is the task synchronous or asynchronous?
4. Is biometric, voice, or location data involved?
5. What is the maximum task duration?
6. Can workers pause safely?
7. Who owns the raw media?
8. What gets deleted, and when?
9. What constitutes acceptable quality?
10. What is the appeal path for contested task outcomes?

If your team cannot answer these clearly, the platform is not ready for real-world data collection.
2) Build Worker Safety Into the Task Lifecycle, Not the Help Center
Safety begins with task design
Worker safety is not just about banning dangerous activities. It starts with the way a task is framed, sequenced, and timed. A platform asking someone to mimic arm motions for robot training must account for posture fatigue, repetitive strain, camera placement, and the worker’s physical environment. Safety should appear in the UI as a constraint, not as a warning buried in documentation. If a task could be hazardous, the interface should force a pre-task safety check, require environmental confirmation, and explain how to stop immediately without penalty.
This is where UX matters. Good safety UX resembles the care taken in gym compliance and record-keeping systems: you do not hope users “just know” what’s safe; you structure behavior through forms, prompts, and logs. Use posture reminders, maximum session timers, cooldowns between repetition blocks, and visual examples of safe setups. For home-based humanoid training, consider whether workers need a stable stand, proper lighting, and minimum floor space before a task can even begin.
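The same constraints can be enforced server-side rather than left to worker judgment. Below is a minimal sketch of a session-timer and cooldown gate; the 20-minute limit and 5-minute cooldown are assumed placeholders that should come from ergonomic review.

```python
from datetime import datetime, timedelta

MAX_SESSION = timedelta(minutes=20)  # assumed ergonomic limit per capture session
COOLDOWN = timedelta(minutes=5)      # assumed rest period between repetition blocks

def may_start_block(session_start: datetime, last_block_end: datetime | None,
                    now: datetime) -> tuple[bool, str]:
    """Gate a new repetition block; the UI should disable 'start' when this fails."""
    if now - session_start >= MAX_SESSION:
        return False, "Session limit reached; please end the session."
    if last_block_end and now - last_block_end < COOLDOWN:
        remaining = COOLDOWN - (now - last_block_end)
        return False, f"Cooldown active; resume in {remaining.seconds // 60 + 1} min."
    return True, "ok"
```

Because the gate runs on the server, a worker who wants to keep going cannot be pressured into skipping rest, and a client bug cannot silently remove the limit.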
Protect against hidden labor harm
Some risks are not visible in a simple task rating system. For example, a task may look low-effort but still create mental fatigue if it requires repeated judgment under uncertainty. Others may create anxiety if a worker fears they will be penalized for pausing during a household recording session. Product teams should instrument time-on-task, break frequency, error correction load, and “frustration events” like repeated redo requests. These metrics are similar in spirit to how a robust platform tracks operational friction in fail-safe systems: the point is to detect failure before users are harmed.
Use escalation rules when the task pattern suggests burden beyond original expectations. If workers consistently take longer than modeled, or if support tickets mention discomfort, your platform should auto-reclassify the task and require review. That review should include both product and trust-and-safety ownership. A platform that ignores this data will eventually create quality collapse, worker churn, and reputational damage that no compensation tweak can repair.
Support, consent, and worker autonomy
Ethical microtask platforms must make refusal easy. Workers should be able to skip tasks, pause sessions, ask for clarification, or leave without losing access to future work. Consent is not meaningful if the alternative is a hidden penalty. Make opt-in language specific and time-bound, and avoid broad catch-all permissions for “future model training.” When the task involves a home setting or personal device, consent should be revisited before every new data modality.
Pro Tip: If a task needs a disclaimer longer than the actual task instructions, your workflow is probably too risky or too poorly scoped to launch.
3) Engineer Pay Transparency Like a Core Product Feature
Workers need predictable earnings, not mysterious incentives
Pay transparency is one of the strongest signals of platform trust. Workers should be able to estimate earnings before accepting a task, and they should know exactly how bonuses, retries, rejections, and time estimates affect total payout. Avoid opaque “performance boosts” that only appear after completion, because they create distrust and can distort behavior. A good microtask platform tells the worker: expected time, base pay, bonus conditions, rejection risk, and payout timing.
Think of this as the labor equivalent of a budget calculator. Just as travelers use true trip budgets to avoid hidden fees, workers need a transparent earnings model to understand the real economics of their time. The platform should calculate effective hourly rates using the estimated completion time, not just per-task labels. If the task includes setup overhead, such as recording equipment setup or environment staging, that time belongs in compensation math too.
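As a sketch of that compensation math, the helper below computes an effective hourly rate that includes setup overhead. The function and its inputs are illustrative, not a standard formula.

```python
def effective_hourly_rate(base_pay: float, bonus: float,
                          task_minutes: float, setup_minutes: float) -> float:
    """Effective rate over all compensable time, including setup overhead."""
    total_hours = (task_minutes + setup_minutes) / 60
    return (base_pay + bonus) / total_hours

# Example: a $4.50 capture task with 12 min of work and 6 min of setup
# is really a $15/hr task, not the $22.50/hr the per-task label implies.
rate = effective_hourly_rate(base_pay=4.50, bonus=0.0,
                             task_minutes=12, setup_minutes=6)
```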
Design compensation around effort, complexity, and risk
Not every task should pay the same way. A simple consensus label can be paid at a flat rate, but a robotic imitation task with setup steps, retries, and physical rehearsal should have a richer pay model. Product teams can segment compensation into base time, calibration time, review time, and hazard premium. That structure is especially important when workers are expected to handle media capture, repeated gestures, or location-specific conditions. For high-variability tasks, use dynamic estimates but keep the formula visible.
A helpful analogy is how marketplaces handle volatility and slippage in other industries. Just as checkout flows can reduce surprises during market swings in slippage-aware checkout design, your platform should reduce wage surprise with payout previews and task commitment locks. Workers should not discover after submission that the task took twice as long as promised. If your model can’t forecast payout accurately, your system should display confidence intervals and recommend a lower-risk acceptance band.
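One way to keep the formula visible is to compute payouts from named components and show a range when completion time is uncertain. The component names, floor-based formula, and p10/p90 band below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class PayComponents:
    base_minutes: float          # modeled median completion time
    calibration_minutes: float   # setup, rehearsal, environment staging
    review_minutes: float        # time spent responding to reviewer feedback
    hourly_floor: float          # platform-wide minimum effective rate
    hazard_premium: float = 0.0  # flat add-on for physically demanding tasks

def payout_preview(c: PayComponents, p10_minutes: float,
                   p90_minutes: float) -> tuple[float, float, float]:
    """Return (low, expected, high) payouts from modeled completion times."""
    def pay(minutes: float) -> float:
        total = minutes + c.calibration_minutes + c.review_minutes
        return round(total / 60 * c.hourly_floor + c.hazard_premium, 2)
    return pay(p10_minutes), pay(c.base_minutes), pay(p90_minutes)

# Example: 12 modeled minutes (p10 8, p90 20), $16/hr floor, $0.50 hazard premium.
preview = payout_preview(
    PayComponents(base_minutes=12, calibration_minutes=3, review_minutes=1,
                  hourly_floor=16.0, hazard_premium=0.50),
    p10_minutes=8, p90_minutes=20)
```

Showing the low end of the band before acceptance is exactly the "lower-risk acceptance band" idea: workers commit with the worst case in view, not just the average.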
Build pay audits into product analytics
Product teams need to monitor whether predicted pay matches actual realized pay. Measure median earnings per active hour, time-to-cashout, rejection-related wage loss, and compensation variance by task family. If a task category routinely pays below your target floor after retries and friction, flag it automatically. This is not only an ethics issue; it is a retention and supply reliability issue. The best workers leave first when pay is unclear, and a platform that treats labor as “cheap” underestimates its long-term cost.
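A minimal sketch of that automatic flagging, assuming session records that capture paid amounts net of retries and rejections; the $15/hour floor is a placeholder.

```python
from statistics import median

TARGET_FLOOR = 15.0  # assumed minimum target in $/active hour

def flag_underpaying_families(sessions: list[dict]) -> set[str]:
    """Flag task families whose median realized rate falls below the floor.
    Each session dict: {'family': str, 'paid': float, 'active_minutes': float},
    where 'paid' is net of rejection and retry losses."""
    by_family: dict[str, list[float]] = {}
    for s in sessions:
        rate = s["paid"] / (s["active_minutes"] / 60)
        by_family.setdefault(s["family"], []).append(rate)
    return {fam for fam, rates in by_family.items()
            if median(rates) < TARGET_FLOOR}
```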
For inspiration, teams can borrow the reporting rigor seen in financial wellness dashboards for engineering teams, where transparency and forecasting improve decision-making. Apply the same logic to worker earnings. Let workers see historical earnings by task type, current queue demand, and projected payout timelines. A platform that treats pay as a first-class UX object earns more trust than one that forces workers to infer economics from trial and error.
4) Treat Data Quality as a Measurable Product Surface
Quality is not just “accuracy”
Many teams reduce data quality to agreement rates or gold-label accuracy. That is insufficient for robot training. High-quality ML datasets require consistency, completeness, temporal alignment, annotation rationale, and task-specific realism. A robot training sample may be technically correct yet still unusable if the movement is jerky, the camera angle obscures the object, or the environment doesn’t match the deployment context. Product teams need a data quality model that reflects the actual use case, not a generic annotation score.
Strong teams define quality at the level of task intent. For example, a humanoid manipulation dataset may need pose continuity, grasp visibility, object identity consistency, and environment diversity. A task that produces excellent consensus but poor motion fidelity can poison training. That is why careful dataset design matters, similar to the way researchers build reliable observational records in lunar observation datasets, where context and provenance are as important as the raw observation.
Track the right metrics for the task type
Your quality dashboard should be task-family specific. For classification tasks, use precision, recall, and inter-annotator agreement. For video-based robot training, add temporal consistency, frame completeness, motion smoothness, and instruction adherence. For capture tasks, measure failed submissions, restart rate, environment compliance, and subjective clarity of instructions. This prevents a common mistake: shipping a single global accuracy metric that obscures task-specific failure modes.
Use a table like the one below to align teams on metric ownership and risk.
| Task type | Primary quality metric | Worker risk | System control | Audit signal |
|---|---|---|---|---|
| Image labeling | Agreement rate | Low | Gold tasks, redundancy | Reviewer variance |
| Text review | Precision/recall | Low | Guideline clarity | Dispute rate |
| Video capture | Frame completeness | Medium | Preflight checks | Redo frequency |
| Robot imitation | Motion fidelity | Medium-High | Pose validation | Time-series drift |
| Safety judgment | Calibration score | High | Escalation rules | Human review log |
Use redundancy intelligently, not wastefully
Redundancy can improve quality, but it also burns worker time and money. The challenge is to use duplication only where uncertainty warrants it. Build confidence-scoring that routes ambiguous tasks to multiple workers, while clear-cut tasks go through a single pass with spot checks. This mirrors good recommendation systems, where the platform avoids over-processing obvious cases and focuses compute on uncertain ones, much like the logic discussed in recommendation engine design.
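Here is a minimal sketch of confidence-gated routing, assuming an upstream heuristic or model that scores each task's ambiguity in [0, 1]; the thresholds, replication counts, and spot-check rate are placeholders.

```python
import random

def plan_redundancy(confidence: float, spot_check_rate: float = 0.05) -> dict:
    """Decide how many independent workers a task needs.
    `confidence` is an assumed upstream ambiguity score in [0, 1]."""
    if confidence >= 0.9:
        # Clear-cut case: single pass, plus a small random spot-check sample.
        return {"workers": 1, "spot_check": random.random() < spot_check_rate}
    if confidence >= 0.6:
        # Moderate ambiguity: three-way majority vote.
        return {"workers": 3, "spot_check": False}
    # High ambiguity: wider consensus, and pay for the extra passes.
    return {"workers": 5, "spot_check": False}
```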
Combine redundancy with worker skill profiling and calibration history. If certain workers consistently perform well on pose-based capture or edge-case classification, route more challenging tasks to them and pay accordingly. Quality then becomes a managed workflow, not a blunt post-hoc filter. That is how mature microtask platforms avoid the false tradeoff between high quality and humane labor design.
5) Make Auditability a Built-In Technical Requirement
Every dataset should have a provenance trail
Auditability is what separates a trustworthy ML dataset from an opaque one. Every task should have a traceable lineage: who created it, who viewed it, who completed it, what instructions were shown, which validation checks passed, what revisions were made, and what version of the dataset it entered. This matters for compliance, incident review, model debugging, and customer trust. If you cannot explain where a training sample came from, you cannot credibly defend how the model was trained.
Borrow from systems that require secure records and document lifecycle control. A strong reference point is secure document workflow design, where access, versioning, and retention are explicit. Your platform should implement immutable event logs for task publication, worker assignment, submission, review, payment, rejection, appeal, and deletion. Ideally, the data model supports dataset snapshots, so customers can reproduce a training set version months later.
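As a sketch of what "immutable" can mean in practice, each event below embeds the hash of the previous event, so any after-the-fact edit breaks the chain on verification. The event names and fields are assumptions; a production system would add signing, durable storage, and dataset snapshot references.

```python
import hashlib
import json
import time

class AuditLog:
    def __init__(self):
        self._events: list[dict] = []

    def append(self, event_type: str, payload: dict) -> dict:
        """Append an event whose hash covers its content and the previous hash."""
        prev_hash = self._events[-1]["hash"] if self._events else "genesis"
        body = {"type": event_type, "payload": payload,
                "ts": time.time(), "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self._events.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the chain; returns False if any event was altered."""
        prev = "genesis"
        for e in self._events:
            expected = dict(e)
            expected.pop("hash")
            h = hashlib.sha256(
                json.dumps(expected, sort_keys=True).encode()).hexdigest()
            if h != e["hash"] or e["prev"] != prev:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append("task_published", {"task_id": "t1", "instruction_version": "v3"})
log.append("submission", {"task_id": "t1", "worker": "w_9f2"})
assert log.verify()
```

The specific hash scheme matters less than the property it buys: tampering becomes detectable, which is the difference between a log you can defend and a log you merely have.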
Separate operational logs from personal data
Auditability does not require exposing worker identity everywhere. In fact, good design usually means the opposite: strong internal traceability with minimized unnecessary exposure. Use pseudonymous worker identifiers for analytics, and keep identity resolution behind role-based access controls. This limits privacy risk while preserving the ability to investigate fraud, errors, or harassment. For highly sensitive programs, consider approaches similar to zero-trust architecture, where access is segmented, logged, and narrowly scoped.
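A common pattern, sketched below, derives a stable pseudonymous analytics ID from the internal worker ID using a keyed hash, so dashboards never see raw identities and linkage requires the key held behind role-based access. The key handling shown is a placeholder.

```python
import hashlib
import hmac

# Placeholder only: in production, load from a secrets manager and rotate.
ANALYTICS_KEY = b"rotate-me-and-store-in-a-secrets-manager"

def analytics_id(worker_id: str) -> str:
    """Stable pseudonym for dashboards and quality metrics.
    Identity resolution stays behind RBAC in a separate service."""
    digest = hmac.new(ANALYTICS_KEY, worker_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability; keep full digest in prod
```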
Also define retention windows for logs, raw media, and derived annotations. The retention policy should vary by data type and customer contract. A platform that stores everything forever creates compliance risk, while a platform that deletes too aggressively destroys auditability. The answer is disciplined lifecycle management, not maximal retention or maximal deletion.
Support review, dispute, and replay
Auditability becomes real when you can replay the entire decision path. If a worker contests a rejection, the platform should show the original task instructions, submitted content, model or reviewer feedback, and decision timestamps. If a customer asks why a dataset segment was excluded, the system should surface the rule that triggered the exclusion. This creates consistency and reduces arbitrary decision-making.
Teams building higher-stakes systems can adopt logging principles similar to those used in health triage AI prototypes, where the key is knowing what to log, block, and escalate. For microtask platforms, the same principle applies: log enough to reconstruct decisions, block unsafe or unauthorized actions, and escalate cases that exceed predefined thresholds. That is what makes an audit trail useful rather than decorative.
6) Build Compliance as a Product Workflow, Not a Legal Afterthought
Classify data by privacy and regulatory impact
Compliance starts with classification. Is the task collecting personal data, biometric data, household video, voice recordings, location metadata, or potentially regulated content? Each category implies different policies for consent, storage, access, transfer, and deletion. Product managers should map every task type to a compliance tier before it goes live. Do not wait until a customer asks for a DPA or a security questionnaire to discover that your workflow is under-specified.
For teams handling potentially sensitive records, the thinking should resemble the discipline in regulated document scanning. In that world, metadata, file handling, and storage boundaries matter as much as the scan itself. On a microtask platform, the equivalent is knowing whether a worker’s recording includes faces, home interiors, device serial numbers, children, or bystanders. Compliance becomes much simpler when you know exactly what category of data each task produces.
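One way to keep that mapping explicit and auditable is to declare each task's data modalities up front and resolve them to a tier in code. The modality list and tier rules below are illustrative assumptions, not legal guidance.

```python
# Assumed mapping: later tiers imply stricter consent, retention, and access rules.
TIER_BY_MODALITY = {
    "synthetic": "tier0",
    "text": "tier1",
    "voice": "tier2",
    "location": "tier2",
    "household_video": "tier3",
    "biometric": "tier3",
}
TIER_ORDER = ["tier0", "tier1", "tier2", "tier3"]

def compliance_tier(modalities: set[str]) -> str:
    """A task inherits the strictest tier among the data it produces."""
    tiers = [TIER_BY_MODALITY[m] for m in modalities]
    return max(tiers, key=TIER_ORDER.index)

assert compliance_tier({"text", "household_video"}) == "tier3"
```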
Build consent and disclosure into the task flow
Consent should be specific, visible, and revocable within platform rules. Workers need to know what data is captured, who will receive it, and how it may be used. Customers also need to know whether their task design creates hidden legal obligations, such as special handling for biometric or household data. The UI should present these disclosures before task acceptance, not after completion.
Where relevant, support separate consent layers for data capture, model training, QA review, and external sharing. This avoids a one-size-fits-all agreement that is too vague to be meaningful. Product teams can further reduce risk by using safe defaults: minimal data capture, short retention, and no secondary use unless explicitly approved. In highly distributed environments, take cues from least-privilege security models and apply them to user permissions and data scopes.
Plan for enterprise security reviews early
If your buyers are enterprise ML teams, they will ask about SOC-style controls, retention, access logging, and subcontractors. The platform should be able to answer those questions with product evidence, not slideware. Create admin dashboards that show audit logs, dataset lineage, retention configuration, and worker jurisdiction if applicable. Security and compliance become sales enablers when they are discoverable in the product, not just in a policy PDF.
Teams that can demonstrate strong controls are also better positioned to win regulated or procurement-heavy deals. That is a familiar pattern in other sectors too, where buyers increasingly prefer vendors with visible controls and a clean review process, as seen in broker-switch due diligence and proof-over-promise audit frameworks. In microtask platforms, trust is a feature, not a brochure.
7) Design the Worker Experience Like a Real Product, Not a Data Tap
Instruction clarity is the first conversion metric
Worker UX is often treated as secondary to customer-facing features, but it is central to both supply quality and ethics. Instructions should be concise, visual, layered, and tested with real workers. If a task requires a worker to capture movement in a certain order, the steps should be previewed with examples, failure states, and a clear explanation of what “good” looks like. Hidden complexity produces bad data and frustrated workers.
Good UX patterns use progressive disclosure: show the essentials first, then deeper detail as needed. That approach is similar to how creators repurpose content in fast editing workflows, where you reduce friction by making the core action easy, then provide controls for advanced users. In microtask platforms, advanced workers may appreciate keyboard shortcuts, bulk actions, and saved settings, but everyone benefits from plain language and fewer ambiguous steps.
Accessibility and device constraints are part of ethics
Not every worker has a high-end phone, stable broadband, or a quiet room. If your platform assumes ideal conditions, you silently exclude many capable workers and create biased dataset outcomes. Design for low-bandwidth modes, resumable uploads, and mobile-first capture where possible. Provide clear device requirements, but avoid overfitting to expensive hardware unless the task truly requires it.
This is especially relevant for global labor pools. Workers may rely on older devices or shared spaces, and your platform must behave gracefully under those constraints. Teams can learn from practical device procurement guidance like accessory bundling for device fleets, where small interface and hardware decisions determine operational success. In the microtask context, the “accessory” might be a tripod requirement, a lighting guide, or a low-data upload mode.
Respect the worker as a collaborator
The best platforms give workers agency and feedback. That means showing why a task was rejected, how to improve, and how their work contributes to the dataset. When workers understand the purpose, they are more likely to comply with instructions and less likely to churn. The emotional difference between “you are a replaceable data source” and “you are a contributor to a safety-critical system” is substantial, and it affects performance.
For product teams, this is not sentimental language; it is operational strategy. Respectful UX improves retention, reduces support load, and makes quality more predictable. In competitive labor markets, a better worker experience can become a moat as real as pricing or feature depth.
8) Create Governance, Metrics, and Escalation Paths That Actually Work
Use a governance dashboard with decision ownership
Every ethical platform needs a governance loop: who reviews policy exceptions, who approves new task categories, and who signs off on incidents. Create a dashboard that shows task risk tier, pay distribution, rejection rate, appeal outcomes, privacy flags, and unresolved worker complaints. Without a clear owner, issues linger. With clear ownership, product, engineering, operations, and legal can move from reactive firefighting to structured review.
Borrow the mindset of capability matrices: the goal is to visualize who can do what, where the gaps are, and what needs attention next. Add a weekly ethics review and a monthly dataset quality review. Keep the meetings short, but make the metrics non-negotiable. If a task family shows worsening pay variance or higher-than-normal disputes, freeze scaling until the root cause is understood.
Define escalation triggers before launch
Escalation rules should be numeric and clear. Examples include: redo rate above a threshold, median task time above forecast by 30%, repeated worker reports of discomfort, region-based consent ambiguity, or a spike in rejected tasks for a single instruction variant. The point of thresholds is not bureaucracy; it is to prevent harm from becoming normalized. If your platform waits for a customer complaint to act, you are already behind.
A good rule of thumb is to escalate by task family, not just globally. A robot training capture task can fail for reasons that don’t appear in text labeling. That means your alerting needs semantic context, not generic SLA dashboards. If a task involves physical motion or home capture, treat rising redo rates as both a quality issue and a possible worker safety signal.
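A minimal sketch of per-family triggers, using the example thresholds above; the exact values are assumptions to be tuned per task family.

```python
def escalation_flags(family_stats: dict) -> list[str]:
    """Evaluate one task family's rolling stats against assumed thresholds.
    Expected keys: redo_rate, median_minutes, forecast_minutes,
    discomfort_reports, rejection_spike."""
    flags = []
    if family_stats["redo_rate"] > 0.15:
        flags.append("redo_rate_above_threshold")
    if family_stats["median_minutes"] > 1.3 * family_stats["forecast_minutes"]:
        flags.append("median_time_30pct_over_forecast")
    if family_stats["discomfort_reports"] >= 3:
        flags.append("repeated_discomfort_reports")
    if family_stats["rejection_spike"]:
        flags.append("rejection_spike_single_instruction_variant")
    return flags  # any flag should freeze scaling and open a review
```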
Incident response should include workers
When things go wrong, the people doing the work should not be the last to know. If a task format is flawed, workers should receive a clear notice, compensation guidance, and a correction plan. If a dataset is recalled, the platform should explain what happened and what will change. This is how you build trust in a labor marketplace that asks people to contribute valuable, real-world data.
The best analogy here is operational recovery in mature systems where failures are expected and contained, not denied. Whether it’s a vehicle workflow, a healthcare integration, or a human training platform, the platform should preserve evidence, halt risky activity, and restore normal operations only after review. That is what separates serious infrastructure from a simple gig app.
9) A Practical Launch Checklist for Product and Engineering Teams
Pre-launch checklist
Use this checklist before shipping a new task family or customer workflow:

1. Classify the task by physical, cognitive, and privacy risk.
2. Define the minimum safe environment and make it visible in the task UI.
3. Publish a transparent pay estimate with a stated formula.
4. Specify quality metrics that match the task type.
5. Ensure all task actions are captured in immutable logs.
6. Assign an owner for support, appeals, and incident response.
7. Set retention and deletion rules.
8. Test the workflow with a small worker cohort before scaling.
9. Document fallback paths when the task fails.
10. Confirm that customer and worker disclosures match the actual data flow.
It can help to mirror the rigor used when companies build portfolio case studies: define the problem, the system, the proof, and the outcome. Your internal launch docs should tell the same story. If you cannot explain the task in those terms, you probably have not thought through the risk model deeply enough.
Engineering checklist
On the engineering side, prioritize typed schemas for tasks and events, role-based access control, a versioned instruction system, and dataset snapshot exports. Build server-side enforcement for pay formulas so compensation cannot be altered by frontend hacks or inconsistent clients. Implement replayable audit logs and exportable incident reports. Add observability for worker errors, latency, drop-off, and task rejection. And treat mobile performance as a core reliability requirement, not a nice-to-have.
Engineering should also own safe defaults. No task should become public without risk metadata, pay metadata, and a designated reviewer. No dataset should be exportable without lineage metadata. No worker should be able to enter a risky task family without seeing the relevant safety disclosures. The more of these controls that live in code rather than manual process, the more scalable and trustworthy the platform becomes.
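A sketch of that kind of server-side gate, with assumed field names for the task schema; the point is that the check runs in code, not in a manual review step.

```python
class PublicationError(Exception):
    pass

REQUIRED_FIELDS = ("risk_tier", "pay_formula", "reviewer_id")

def publish_task(task: dict) -> dict:
    """Server-side gate: a task cannot go public without risk metadata,
    pay metadata, and a designated reviewer, regardless of client behavior."""
    missing = [f for f in REQUIRED_FIELDS if not task.get(f)]
    if missing:
        raise PublicationError(f"refusing to publish; missing: {missing}")
    if task["risk_tier"] in ("medium", "high") and not task.get("safety_disclosure"):
        raise PublicationError("risky task family requires a safety disclosure")
    task["status"] = "public"
    return task
```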
Operating checklist
Operations should run regular reviews on worker complaints, payment anomalies, customer disputes, and data quality drift. Pair those reviews with targeted interviews of active workers, especially in new regions or task families. For robot training data, small changes in instructions or device mix can have outsized effects on quality. Treat these operations sessions as product discovery, not just support hygiene.
Pro Tip: If you only review ethics after a customer escalation, your system is already designed for failure. Review it like a release-critical metric, not a policy exception.
10) FAQ: Common Questions About Ethical Microtask Platform Design
What is the most important ethical control for a microtask platform?
The most important control is likely the combination of pay transparency and task risk classification. If workers do not understand what a task pays and what it demands, every other ethical safeguard becomes weaker. Transparent pricing, clear instructions, and safe task boundaries prevent a large share of harm before it starts.
How do we measure data quality for robot training tasks?
Use task-specific metrics rather than a single global score. For robot training, that may include motion fidelity, temporal continuity, frame completeness, instruction adherence, and redo rate. The best metric set depends on the downstream model goal and the kind of behavior the robot is expected to learn.
Should workers be allowed to skip tasks without penalty?
Yes, ideally. Workers should be able to decline tasks that are unsafe, unclear, or too burdensome. If skipping leads to reduced access or hidden penalties, the consent model is weakened and the platform may create perverse incentives that harm both quality and trust.
What should be included in an audit trail?
Include task creation, instruction version, assignment, submission, review, payment, rejection, appeal, dataset export, and deletion. The trail should be replayable and tied to a dataset snapshot so customers and internal teams can reconstruct what happened. That is essential for debugging, compliance, and trust.
How do we make sure pay remains fair as tasks scale?
Track predicted versus realized earnings, rejection-related pay loss, and earnings variance across task families and geographies. If a category drifts below your intended floor, reprice it or redesign the workflow. Fair pay is not a static policy; it is an operational metric that needs continuous monitoring.
Do we need special compliance controls for home-based video or motion tasks?
Usually yes. Home-based media can capture faces, interior spaces, location clues, and other sensitive information. You should classify the data, narrow consent, limit retention, and restrict access based on the actual privacy risk. When in doubt, treat the capture flow as sensitive by default.
Conclusion: Ethical Microtask Platforms Are Built, Not Declared
Building ethical microtask platforms for robot training is not a branding exercise. It is a product, engineering, and operations discipline that shapes dataset quality, worker trust, enterprise readiness, and long-term platform survival. The teams that win in this market will be the ones that treat worker safety, pay transparency, and auditability as core system requirements rather than content afterthoughts. They will also be the teams that design for real human behavior, not idealized task completion.
If you are shipping a new platform or redesigning an existing one, start with the checklist in this guide and audit every task family against it. Review your risk model, instrument quality metrics, expose pay clearly, log everything that matters, and make it easy for workers to understand and trust the system. Then iterate with real worker feedback and customer evidence. For broader operational context, you may also find value in labor market signal analysis, AI talent sourcing patterns, and secure audit-trail design, because the same trust mechanics show up across modern platforms.
Related Reading
- Automation and Care: What Robotic Process Automation Means for Caregiver Jobs — Risks and Upskilling Paths - A useful lens on how automation changes labor, risk, and worker adaptation.
- Quantum Software for a Noisy World: Designing for Shallow Circuits - A systems-thinking article on designing for imperfect environments.
- Building a Safe Health-Triage AI Prototype: What to Log, Block, and Escalate - Great reference for safety logging and escalation design.
- Immersive Tech Competitive Map: A Market Share & Capability Matrix Template - A practical template for governance and capability mapping.
- How to Choose a Secure Document Workflow for Remote Accounting and Finance Teams - Helpful for thinking about access control, retention, and workflow security.