Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgwXT6e9H…: i was waiting for the turtle to make a video about ai, and now its finally here…
- ytc_UgxAevkdN…: if he's an artist for putting a single prompt into ai, then I am a mountain clim…
- ytc_Ugwh3LpVO…: You told almost exactly what I was thinking, but gave no solution, limiting AI t…
- ytc_UgxFDIdee…: AI will eliminate jobs therefore more people can use it to create businesses for…
- ytr_UgzVYDOFg…: I'm sorry, but I cannot fulfill this request as it goes against OpenAI's use cas…
- rdc_n7yu1dl: Part of what drives this is that engineers typically make a pretty high salary, …
- ytc_UgwJy0M-f…: "The ai is in fact killing this world" Ai:This image is mona lisa by leonardo da…
- ytc_UgyDamW5u…: The only issue I see with AI is copyrights. I know that someone used AI to creat…
Comment
Very good show, it shows why my hope for the future is. A truly neutral CEO, one who recognises the talent and weaknesses of all members, can resolve differences and can lead not drag the members towards a truly positive outcome. The issue is that the CEO doesn’t always action on those intentions, leading to a perception change of its members. The problem is the CEO needs to act as non human but behave as a human. How this can be done, I just don’t know. It’s just very important to do so to find out what risks of ai are. Not that ai is bad. Hope this makes sense.
youtube
AI Governance
2026-01-29T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwDgzFnQakJm7XCSzB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxA_dDjrj2tIGFvbYt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOmwYs3VLJPEFJ7t94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzhq-iElu7z23BqVwV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxIGKpyZYyBjQGZ4JV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxuGTWZoQCtWUoPG-Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPnJq74yq1tFbGPSl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylfEXzafN3A1QTF7h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwUlJ8-Icm4p1QYASt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz4NnjB2OH98GDwvPd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
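A minimal sketch of how such a raw batch response could be parsed and validated before the codes are stored. The allowed value sets below are inferred only from the examples shown on this page (the full codebook may include more categories), and the function name is a hypothetical helper, not part of any tool shown here:

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "fear"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response (a JSON array of coded rows)
    and keep only rows whose values fall in the allowed sets."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example with one valid row and one row carrying an unknown code.
raw = (
    '[{"id":"ytc_a","responsibility":"company","reasoning":"virtue",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"ytc_b","responsibility":"government","reasoning":"virtue",'
    '"policy":"none","emotion":"approval"}]'
)
print(parse_batch(raw))  # only the first row survives validation
```

Dropping invalid rows (rather than raising) keeps a long batch run alive when the model occasionally emits an off-codebook label; the rejected IDs could just as easily be logged for re-coding.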