Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugy3-Zajx…: "this hurts </3 as an artist that genuinely likes drawing in different styles thi…"
- ytc_UgwJuyQwU…: "Data centres are huge environmental disasters that are poisoning native lands an…"
- ytc_UgwV_P2Wg…: "Haha, fair enough! I’ll be here when the 7 days are up—ready and waiting. Stay a…"
- ytc_UgzXnr0c-…: "Ironic isn't it - that Joe Biden can't think for himself in any resemblance of …"
- ytc_UgwqRiiJn…: "You're talking about autonomous vehicles built 7 years ago. They keep improving …"
- ytc_UgwP_x5tH…: "CUT!!!🎬 Robot in car: i have 30 bullet led in my arm an leg but "I'm alive"…"
- ytc_Ugx7kufYt…: "I'd like to see how driverless cars will react on a road where it can't see the …"
- ytc_Ugz0ENZc0…: "we have to humanize ai , if we want it to be sentient , its just a better idea t…"
Comment
AI's lack of inherent ethics and autonomy poses significant risks, enabling scalable malfeasance. Deployed in illicit call centers, AI could emulate localized personas, amplifying deception. Legitimate entities, akin to Enron or FTX, could leverage AI for systemic fraud, obliterating retail and institutional stakeholders. Misused AI in lawful businesses may perpetuate deceptive practices, misrepresenting accounts, products, or T&Cs. On the gravest spectrum, AI could empower criminal syndicates or terrorist entities, executing sophisticated, untraceable operations. Robust regulatory frameworks and proactive oversight are imperative to mitigate these threats.
Source: youtube · Category: AI Governance · Posted: 2025-06-16T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwdQI0EScRogsugsrR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyzNBq_WIqD6GDxZM94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxf5xdzjSRX-Yk6dNJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy3v8EmmvQSpqFOItl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwD6hQIGZSPoKpja414AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwk16MkF_-ABa3Yggd4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrGBU0dra2a9aD7_94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxqQ7Kz_pwznNzgRBZ4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_UgxD6hfQulJ8MLnwyGZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwQYQZlGEEe81_4cdt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
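A raw response like the one above can be checked before the labels are stored. The sketch below is a minimal validator, assuming the category vocabularies visible in the samples (the real codebook may define additional values, and the `SCHEMA` dictionary and `validate_batch` helper are illustrative names, not part of the actual pipeline):

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# above. Assumption: the real codebook may include further categories.
SCHEMA = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "none"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or off-schema records."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgwdQI0EScRogsugsrR4AaABAg","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # 1
```

Validating against a closed vocabulary catches the most common failure mode with coding LLMs: a plausible-looking label (e.g. `"anger"` instead of `"outrage"`) that would silently fragment the category counts downstream.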