Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> What if we just implant a lizard brain esc directive that prevents it from being problematic in the first place? What if it’s powered by a facility with its own independent power and no internet access? There are things we can do to make ai safe we just don’t talk about them enough. And what If we’re just being human-centric and thinking “everything wants to kill us because we’re so special”?

Source: youtube · Topic: AI Governance · Posted: 2024-07-02T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgybClLuoAI4Dj241dR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxR--ghM1hBvNBzSzJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx5GuCyK6rZcrRIXUJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyeEXZ7HF1megIaGFx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzWuHc7ddca96AxB-V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzX8AofQpnBvIeeVPN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx5sPaafb2vTVe_5dF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzXm4X7ZzIxYFAK8OR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugym_-2buvGuCQqDHc54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx3_3OaZfrT9uRNydB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
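A response like the one above has to be parsed and checked before its labels can be trusted: each record should carry an `id` plus one value per coding dimension, drawn from a fixed codebook. The sketch below shows one minimal way to do that in Python. The allowed label sets are assumptions inferred from the values visible in this sample, not the project's actual codebook, and `validate_batch` is a hypothetical helper, not part of the pipeline shown here.

```python
import json

# Allowed values per dimension. NOTE: these sets are inferred from the
# labels visible in the sample response above; the real codebook may
# contain additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that have an id
    and a recognized label for every coding dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        has_id = bool(rec.get("id"))
        labels_ok = all(rec.get(dim) in values for dim, values in ALLOWED.items())
        if has_id and labels_ok:
            valid.append(rec)
    return valid

# Two illustrative records (hypothetical ids): the second uses an
# unrecognized "responsibility" label and is dropped.
raw = """[
  {"id":"ytc_example1","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_example2","responsibility":"robots","reasoning":"virtue","policy":"none","emotion":"fear"}
]"""

batch = validate_batch(raw)
print(len(batch))            # 1 — only the fully recognized record survives
print(batch[0]["emotion"])   # approval
```

Dropping (or flagging) off-codebook records rather than silently accepting them is what makes the "Coding Result" panel above trustworthy: every value it displays has already passed this kind of check.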