Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- They've been testing universal basic income for a long time now because they kno… (ytc_UgxRJISJa…)
- I didn't say Tesla autopilot is safer today, I don't have that data, nor was it … (ytr_Ugzr6JIuy…)
- Feed AI with knowledge of 1890 and see if it comes up with theory of relativity.… (ytc_Ugxt1xn4a…)
- Nationalise the ai companies and use dividends to fund ubi. Fifty percent of ope… (ytc_Ugy6X--Xc…)
- I get where you're coming from! Engaging with AI can feel a bit strange sometime… (ytr_UgyTcEGDB…)
- The algorithm "trains" it self the same way we do when we see art, and attempts … (ytc_Ugwv8R2dx…)
- This won't end well. There are so many possibilities, but the rose colored glass… (ytc_UgylAVADf…)
- I literally can't with that guy anymore, he did rather embarrassing himself just… (ytc_UgyklCsac…)
Comment
What really worries me the most is what if there’s a hacker and they program your own robot to cause you harm or even take your life? What if they can override the coding? Do we know whose programming these, maybe the Chinese and they can take over our country if these robots all turn against us. Or they get a code that everyone who is above 50, 60 or 70 is not a value anymore and to get rid of us?
youtube · AI Governance · 2026-02-19T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxG6f60ZlObElHlfIZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyIc232zuEiwu86G2l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx8RIZz4E92pviwkpR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwGJXSZZJjCFC8v01Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgytHqr5HIoRG7i3L3Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyfSQWnZTGDaM5z3J94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzMBmgz_v8tTP2c2RV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwuAxbSkPRNHLhDC3N4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz1JlfLfINJ0LgC3aF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwN-KahCxlWMYhlkd14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
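The raw response is a JSON array in which each record carries the comment ID plus the four coded dimensions (responsibility, reasoning, policy, emotion). The look-up-by-comment-ID flow this page offers can be sketched as below; this is a minimal illustration assuming only that the response text parses as the JSON array shown above (the `index_by_id` helper is hypothetical, not part of the tool):

```python
import json

# Two records copied from the raw batch response above, abridged for the sketch.
raw_response = """
[
  {"id": "ytc_UgytHqr5HIoRG7i3L3Z4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxG6f60ZlObElHlfIZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

def index_by_id(response_text: str) -> dict[str, dict]:
    """Parse a raw batch response and index its records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

lookup = index_by_id(raw_response)
print(lookup["ytc_UgytHqr5HIoRG7i3L3Z4AaABAg"]["emotion"])  # → fear
```

Indexing once and looking up by ID keeps each inspection O(1), which matters when a coding run produces thousands of records per batch.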