Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I remember doing a masters in 2006 teaching computers to recognise faces, it was fascinating but i remembered thinking this could end very badly for humans.
If AI was tasked with saving the planet, it wouldnt take it long to realise it needed to get rid of humans to save the planet.
Source: youtube · AI Governance · 2025-07-06T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
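Each coding assigns one value per dimension. A minimal validation sketch in Python, where the allowed value sets are inferred only from the examples shown on this page (the actual codebook may define additional categories):

```python
from dataclasses import dataclass

# Value sets inferred from the coded examples on this page;
# the real codebook may contain more categories.
RESPONSIBILITY = {"company", "developer", "ai_itself", "none", "unclear"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"regulate", "liability", "ban", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference"}

@dataclass
class Coding:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension holds a value outside the inferred sets.
        for value, allowed, name in (
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected {name} value: {value!r}")

# The coding shown in the table above.
c = Coding("ytc_UgzW5u680mtmkkfTcEh4AaABAg",
           "ai_itself", "consequentialist", "unclear", "fear")
c.validate()  # passes: all four values fall in the inferred sets
```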
Raw LLM Response
```json
[
  {"id":"ytc_UgxJR9_zycrZmoLaT_l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxtwA9VawSfgI-VRxp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzW5u680mtmkkfTcEh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8dW_VnoINeu3Hout4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxBq4j-NkJedSN7ppV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx8FqmgA2wKcpcFIN54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwY-3l1yVLr2Ys1BuV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxNPjOP3kn2-jCjAel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyZkN17I9V-0Fa8f5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxV63AtsU0An6tlWWt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```