## Raw LLM Responses

Inspect the exact model output for any coded comment.
Look up a response directly by comment ID, or inspect one of the random samples below (previews truncated):

- `ytc_UgycRBA5W…`: "Adolf HItler said tell someone the same thing 1000 times and he will eventually …"
- `ytc_Ugz8qg2jx…`: "So tired of hearing about “danger concerns” about ai. SOME humans are way more d…"
- `ytr_UgxeZfjvM…`: "*AI has no true intelligence to it.* To be pedantic, isn't that what the "artifi…"
- `ytc_UgwjJC7hJ…`: "When learning something that (at the moment) is too difficult, we tend to tune o…"
- `ytc_Ugzw28OxN…`: "AI hype is hilarious - guys, its just a model that is modeling on top of data. N…"
- `ytc_UgwFvC4qx…`: "Jony Ive thinks he's found a new soul mate in Sam. In reality, he's just found a…"
- `ytc_UgzGjHbYy…`: "A major new Economic Social Model is going to be needed and Universal Basic Inco…"
- `ytc_UgxosZL0s…`: "Fight your robot against a robot made in Japan so your robot will be slaughtered…"
### Comment

> Feels like two separate risks are being conflated here.
>
> The first risk is economic upheaval due to mass unemployment. I feel this may come but we will adapt to different jobs and AI will never totally replace human intelligence.
>
> Second the risk of AI becoming conscious and self determining which will not happen.
>
> In summary, humans will never want AI to replace human interaction but it will automate away dead end jobs but it will never control what happens because it is just a computer and not conscious or self determining

youtube · AI Governance · 2025-09-04T17:4… · ♥ 7
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
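Each coded comment carries four dimensions: responsibility, reasoning, policy, and emotion. A minimal validation sketch in Python can catch rows where the model drifted outside the expected labels; note that the value sets below are only those observed in this section, not necessarily the full codebook, and the helper name `validate` is illustrative:

```python
# Value sets observed in this section; the real codebook may define more (assumption).
OBSERVED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"resignation", "fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(code: dict) -> list[str]:
    """Return a list of problems with one coded row; an empty list means it passes."""
    problems = []
    for dim, allowed in OBSERVED.items():
        value = code.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

# The coded row from the table above.
row = {"responsibility": "ai_itself", "reasoning": "consequentialist",
       "policy": "none", "emotion": "resignation"}
print(validate(row))  # []
```

Rows that fail validation can be flagged for re-coding rather than silently accepted.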
### Raw LLM Response

```json
[
  {"id":"ytc_UgwGbiwCLq8x9gLzYvl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwBCaI3VQbzqr_phIV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyVnNaUe47uvHhfVx54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugynsl-QScGRUOO3m994AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxo-ZbZdaYJVheSil54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyde5UmkKxrCTtmehF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxHZOnzsAjhPnK8zxB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwwF2yeyzhRofQajgF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugz-_IE6rBU2z5ffUR14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxU8xkgV43LGJ5-mRh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
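Because the raw response is a flat JSON array in which every row carries its comment `id`, matching codes back to comments is a single parse plus a dictionary build. A minimal sketch in Python, assuming well-formed JSON; the helper name `index_codes` and the two-row sample string are illustrative, while the field names match the response shown above:

```python
import json

# A trimmed two-row sample in the same shape as the raw response above.
raw_response = '''[
{"id":"ytc_Ugyde5UmkKxrCTtmehF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxHZOnzsAjhPnK8zxB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]'''

def index_codes(response_text: str) -> dict:
    """Parse a raw batch response and index the coded rows by comment ID."""
    rows = json.loads(response_text)  # raises json.JSONDecodeError on malformed output
    return {row["id"]: row for row in rows}

codes = index_codes(raw_response)
print(codes["ytc_Ugyde5UmkKxrCTtmehF4AaABAg"]["emotion"])  # resignation
```

Indexing by ID this way is what makes the "look up by comment ID" view above cheap: each comment's codes are retrieved in constant time instead of rescanning the batch.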