Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Was anybody else waiting for the robot to gun down the human or I’ve watched too…” (ytc_Ugy97SjIi…)
- “Stop calling these machine learning tools "AI." They're not AI. They have no in…” (ytc_UgzoFsbcy…)
- “I think this downplays the need to protect against ai double agents i.e. stuxnet…” (ytc_Ugy2ZdaY8…)
- “There are only two conclusions to be drawn from all this: Either a self-aware AI…” (ytc_UgyXMCtCz…)
- “This is good intentions road to hell. We don’t need all of this AI and all of th…” (ytc_UgxZluaYh…)
- “Look, its all about the *EFFECTIVENESS* of a legal system, while our technology …” (ytr_Ugwh8nUmm…)
- “I would love for AI to get so advanced it takes care of all our needs. It's craz…” (ytc_UgwVv2yNZ…)
- “Ai only gonna make job easier for finance consultant as they can get all data re…” (ytc_Ugw4o-mVS…)
Comment

> Geezuz. This interviewer started off asking some decent questions. Then, came 'what is indignation?' and a puzzled expression when 'lethal autonomous weapons' was mentioned. Moron.

youtube · AI Governance · 2025-06-28T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxX1oUyygziLK4vXPh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwBvu-xxgahKsWV4lF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwlL_FmBYnhOwEV4Ex4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgysDl0UNb0FqQN0WKF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugwtr8rCihfuk7Vf-uV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyCLi-a9qrVIhXsxQB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzIWCLbJwUoVlEgto94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxrPRPw38pDUYJBr-x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyCZYlj-uYAL77nyBd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzZTAaAXFtcvfufPjx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
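The batch format above lends itself to simple post-processing. Below is a minimal sketch of how such a response could be parsed, checked against the coding schema, and indexed to support comment-ID lookup. The allowed values per dimension are inferred only from the sample output visible here; the project's actual codebook may define additional categories, and `parse_batch` and `SCHEMA` are hypothetical names, not part of the tool itself.

```python
import json

# Category values per dimension, inferred from the sample response above.
# The real codebook may define more values than these (assumption).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "developer", "government"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "liability"},
    "emotion": {"outrage", "resignation", "indifference", "mixed", "fear"},
}

def parse_batch(raw: str):
    """Parse a raw LLM batch response.

    Returns (index, problems): an id -> record dict for lookup,
    and a list of (id, dimension, value) tuples for any value
    that falls outside the schema.
    """
    records = json.loads(raw)
    index, problems = {}, []
    for rec in records:
        index[rec.get("id")] = rec
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id"), dim, rec.get(dim)))
    return index, problems
```

Used on a batch like the one shown, `index["ytc_…"]` retrieves the coding for one comment (the "look up by comment ID" view), while a non-empty `problems` list flags responses where the model drifted off the codebook and the comment should be re-coded.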