Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
They probably have already created a sentient AI and they want to release it to …
ytc_UgwxVnwSN…
Russian frontiersmen of the kind that only exists for the American today in roma…
rdc_d2xwkea
Giving an automatic gun to a robot is like giving an automatic gun to a child…
ytc_UgzxwiwsV…
Totally fake, but looks cool. The concept is scary though. Let's not go any furt…
ytc_UgxGub7I8…
AI ultimately thrives at convincing non-experts in a given field that it is an e…
rdc_n7jwsfu
I wonder if ai is limited by the amount of terrabytes it has to be dangerous or …
ytc_UgwV3EsGA…
same thing was said with the industrial revolution and the invention of motorcar…
ytc_UgzYF0S4I…
On the contrary, I think that the countries that ban these AI's will be done. T…
ytc_UgyB89lFt…
Comment
AI is not going to destroy humanity in 10, 20, 30, or even 50 years. The same way nuclear weapons didn't/have not destroyed the world. Human have the back doors for these technologies and will always leave a loophole just in case things go wrong. Governments are just creating imaginary enemy to justify wasting taxpayers' money. The dame way nothing went wrong with Nuke, nothing will go wrong with AI
youtube
AI Governance
2025-08-03T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugzf1oo2AZYhUU6bE9N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDeGlpP7oI5UrMOoB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyTQQw9HrWM2v56yVN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy-betGn52GTR1Uz114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyoqY5GYeahSMoOHkF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzCimXgix5TmhdkaTR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxpkNBLZeWATL18WO14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz2l9bUdT2eY3aK0Lp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzb5GoawFPb857_ZZF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7VwUzDeTAPZ643Rd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})
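The raw response above contains coded values for every record, yet all four dimensions were stored as "unclear": the array closes with `)` rather than `]`, so it is not valid JSON. A minimal sketch of a defensive parser that would produce this behavior (function and field names here are illustrative, not the tool's actual code) falls back to "unclear" whenever the payload fails to parse or the comment ID is missing:

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment from a raw LLM
    response, falling back to "unclear" on any parse failure."""
    try:
        records = json.loads(raw)
        for rec in records:
            if rec.get("id") == comment_id:
                return {d: rec.get(d, "unclear") for d in DIMENSIONS}
    except (json.JSONDecodeError, TypeError):
        # Malformed payload, e.g. an array closed with ")" instead of "]"
        pass
    return {d: "unclear" for d in DIMENSIONS}

# A well-formed fragment parses normally:
good = ('[{"id":"ytc_X","responsibility":"none",'
        '"reasoning":"consequentialist","policy":"none",'
        '"emotion":"indifference"}]')
print(parse_coding_response(good, "ytc_X")["reasoning"])  # consequentialist

# The same array closed with ")" fails json.loads, so every
# dimension falls back to "unclear":
bad = good[:-1] + ")"
print(parse_coding_response(bad, "ytc_X")["reasoning"])  # unclear
```

Under this assumption, a single stray character in the model output silently degrades the whole batch to "unclear"; stricter handling might instead surface the decode error so the batch can be re-run.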