Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "One thing is certain: AI will significantly concentrate wealth at the top of the…" (ytc_Ugz7LR7Ed…)
- "I'm scared I don't want to see one of the fucking AI in real life…" (ytc_UgzC5_41s…)
- "Great video. The career ladder is no more. I've worked in tech recruitment for o…" (ytc_Ugx8Su66d…)
- "Your not even wrong and the issue is we're to busy treating Ai as if it's stupid…" (ytr_UgwQGjgxu…)
- "The main problem in her case is not even AI, it was her not seeing the clear red…" (ytc_UgwQEK8zz…)
- "@eastafrica1020 Pretty much all prominent AI scientists were his students and t…" (ytr_Ugy0ibI3i…)
- "Chatgpt is just a public player. There are propriety AI companies are using alre…" (ytc_Ugz_cFH3I…)
- "someone here in the comments said that ai bros are just useless \"idea guys\" , s…" (ytc_UgyiK1iUL…)
Comment
If AGI were to be fed information on how it was created, it would be able to create the next, better, faster, bigger generation of AGI, which could then be fed information on how it was created, and so-on. We're not at a point where AI can write more than 10 lines of code without screwing it up somewhere yet, so we're safe for now.
youtube
AI Governance
2026-03-18T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzefk0ERwhgizAGqWV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzMp826dOOeGp880yp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx_UjX-RltaggbEf5N4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzIjQtMarlbAx3iByh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwzR9MjPdKGcrKP7354AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwpKTeo_IY0iwIeXTF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyNsOK0DZFI5s-cCA94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxAyLbQ7hLtijrrufJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyhvPtJYu6VDPVNdrp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxrFusZ-h8-OIupFuR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
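The raw response above is a JSON array of per-comment codes, one record per comment ID, with the four coding dimensions shown in the result table. As a minimal sketch of how such output could be parsed and validated, assuming the allowed value sets below (inferred only from the values visible on this page; the real codebook may define more categories), a record filter might look like this:

```python
import json

# Allowed values per coding dimension. NOTE: these sets are an assumption,
# inferred from the values observed in this dashboard, not the full codebook.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop records missing a comment ID
        # Keep the record only if every dimension has an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
print(validate_codes(raw))  # the single record passes validation
```

Filtering rather than raising keeps one malformed record from discarding an entire batch, which matters when the LLM occasionally emits an off-schema value.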