Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgzPLvqfI…` — "Leaders are using ai as a fall guy. So when horrible things happen leaders can j…"
- `ytr_UgxlYKASO…` — "As opposed to what.. *Dont* invest in Ai, so it *will* be poorly developed? …"
- `ytc_Ugz3STXjL…` — "Automation is to reduce the cost of things. If there is no cost then everything …"
- `rdc_fvz1keq` — "100%. I was working at a hospital about a year back in a high crime area. They w…"
- `ytc_UgxtWxTd7…` — "I need a same lane approaching RADAR with a narrow view, mounted rear facing on …"
- `ytc_UgzvBQ8ks…` — "AI is going to be the end of this world.. why are people so stupid…"
- `ytc_UgyvD79sy…` — "I wonder if the robot will be sacked destroyed or prison time if court stealing…"
- `ytr_UgwYWvtQs…` — "How could they ban the use of AI for script-writing? The AI companies agree expa…"
Comment
Hey Alex, I believed I have developes a framework that will get the AI models to give proovably more clear and accurate responses. Especially on very hard topics. I am not in the AI field. I have just been searched for truth. Please reach out if you would like to see the details. I truly believe you will find it fascinating and useful. Thanks for being consistent Alex.
youtube · AI Governance · 2026-02-17T22:0… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy-E5EaFzFBfE82Ja54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxHqz1PUI6iP7zHbx14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxQ73o03crpef0SqhJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw2ntUYi3LTel4ExtV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwK3Zb0gc0E6cQNclh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy6rR5fSWj-nmunRJZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzpH9APpM-qp3hJ1el4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugweh8thqQaWQVtvJ3B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgysnAWK1nigmleHwPR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxtdWhqRwK6ST11rlR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
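A batch response like the one above can be parsed and sanity-checked before its codes are stored. The sketch below is a minimal validator; the per-dimension label sets are an assumption inferred only from the values visible in this output, not the project's actual codebook, and `validate_batch` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension -- ASSUMED from the labels seen in
# the displayed output; the real codebook may define additional labels.
SCHEMA = {
    "responsibility": {"none", "unclear", "ai_itself", "user", "government", "company"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"approval", "mixed", "indifference", "outrage", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every coded comment.

    Raises ValueError if a row is missing its comment id or uses a
    label outside the assumed schema; returns the parsed rows otherwise.
    """
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row!r}")
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: bad {dim!r} value {value!r}")
    return rows

# Usage: a one-row batch in the same shape as the raw response above.
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]'
print(len(validate_batch(raw)))  # 1
```

Validating before storage means a hallucinated or off-schema label fails loudly at ingest time rather than silently skewing the coded dataset.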