Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Don(t take AI. with ChatGPT-4, 5 6 etc serious until they will use QUBITS as IBM…
ytc_UgyHq9tST…
"AI is coming for your job, you will be a broken beggar"
The "experts" say this…
ytc_Ugy_GW3DN…
I knew this going to happen, AI is going to be a million times worse🤬🤬🤬🤬🤬🤬😡😡😡😡😡 …
ytc_UgzzA8cYr…
I have to shake my head at someone thinking they could bring AI into a place fil…
ytr_Ugz-ExsVs…
I screamed at Gemini to leave me alone, and it did. So far so good!…
ytc_UgxqeCfjA…
If ai do everything what will human being do , creating lazy generations depende…
ytc_UgxONKKmQ…
Waymo vehicles can't even property communicate with each other and navigate a si…
ytc_UgwWd70FO…
I agree with you.
Loads of people in this subreddit don't understand basic econ…
rdc_glhx8mg
Comment
One thing that has to happen without any doubt: ALL CODE involving anything AI must be open to transparency checks and never allowed to be developed beyond a self aware or global access scale. Whoever argues intellectual rights for the sake of personal profits on this work should be immediately investigated and their work isolated for review. I am a firm backer of civil liberties and freedom, but this is Pandora's box. Once the damage is done, it is irreversible to the point, that life would be better without AI to begin with. ALL WORK needs unanimous approval by world wide endorsement, else we will be humanity's last. These decisions are that important.
youtube
AI Governance
2023-06-17T09:4…
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyZWh8EOuBq5oOw2Y54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgysS1H_fEv2T6iVf6R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyYw0DLoWg3fwHTwQF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyOyvhgyquvie1V05h4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwtFHmHAswSUAruLFt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwFeiIqlMt5RE-1FF14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwRE6JpWjkyMftbov14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxtFxOZ6nxyr3UiFhp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxipXuw1eABhVgGiqZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxVpaR4t1zY-LLB78t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
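The lookup-by-comment-ID step above can be sketched in a few lines of Python. This is a minimal sketch, not the tool's actual implementation: the `lookup` helper name and the inline `raw_response` string are illustrative assumptions, while the record shape (an `id` plus the four coding dimensions — responsibility, reasoning, policy, emotion) matches the raw response shown above.

```python
import json

# Illustrative one-record sample in the same shape as the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgwtFHmHAswSUAruLFt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

def lookup(raw, comment_id):
    """Parse the model output and return the coded record for one comment ID,
    or None if the output is not valid JSON or the ID is absent."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted malformed JSON; nothing to look up
    return next((r for r in records if r.get("id") == comment_id), None)

record = lookup(raw_response, "ytc_UgwtFHmHAswSUAruLFt4AaABAg")
```

Guarding the `json.loads` call matters here because LLM output is not guaranteed to be well-formed JSON; a failed parse returns `None` rather than raising mid-pipeline.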