Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "So I digger in ChatGPT and said does it know who the anti christ is… and it said…" (ytc_UgwdtM2xG…)
- "i recommed you do this / Once see the thinking text of gemini, it doesn't praise t…" (ytc_UgzV1ysF9…)
- "AI Slop makers Are like Those Rapis* who blame the victim for being Ra*ed by the…" (ytc_UgwtR5h-V…)
- "So I'm a data scientist who trains machine learning models (i hate the industry …" (ytc_UgyFsUaQg…)
- "It's all of the above. Plus two other things. 1 - there's no reason to think AI…" (ytr_UgxPxTtUp…)
- "AI *CANNOT PROVIDE* sources it drew from because it does not draw from *ANY SPEC…" (ytc_UgyZuqooD…)
- "let me give you a reason due to which your entire case collapses. Technically (e…" (ytc_UgxM70JO2…)
- "I always give a thumbs up to the Google AI Search. Most of the time the resulti…" (ytc_UgyRGiDqc…)
Comment
I get it elons point here, but the problem is time and corruption
Overtime any agency you could come up with will fall victim to the same corruption we see in our politicians
Add that to AI robots that are 1000 times smarter than a human and also physically stronger as well. That is truly, the end of humanity
A select group of people at the top will continue to consolidate their power, which will be amplified by AI until the average human becomes unneeded and then extinct
At which point, the AI will probably turn on those select few as well because the AI doesn’t need those humans either
This would mean the end of our species, and no regulation agency would be able to stop it, even if it isn’t corrupted over time, which of course it will be
All AI should be destroyed immediately, the risk VASTLY OUTWEIGHS the reward
It’s truly troublesome how many people don’t understand that
Platform: youtube · Topic: AI Governance · Posted: 2023-04-19T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
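As a minimal sketch, one coded record could be represented as a small typed structure like the one below. The four dimensions and the timestamp come from the table above; the example value lists in the comments are only those visible in this page's sample output and may be incomplete, and the class name is hypothetical.

```python
from dataclasses import dataclass

# Sketch of one coded comment record, assuming the dimensions shown in the
# "Coding Result" table above. Value sets are inferred from the sample output
# on this page and may not be exhaustive.
@dataclass
class CodingResult:
    comment_id: str      # e.g. "ytc_UgyUQyP2S_eytOtGfEx4AaABAg"
    responsibility: str  # seen: "distributed", "developer", "ai_itself", "unclear"
    reasoning: str       # seen: "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # seen: "regulate", "unclear"
    emotion: str         # seen: "fear", "outrage", "indifference", "unclear"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```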
Raw LLM Response
```json
[
{"id":"ytc_UgyUQyP2S_eytOtGfEx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxNuDKt6KIPbxQIS394AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyXeBb2Mryy1Fk0eaV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzOH5xZrPAsPTKGEjl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyRWBE9VHm-TMNGYeN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxp17aeX9xqbdNesSd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx8LE9VNV3IbG39zFR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwZQ_vf-nBzEb59izh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzFdbvPdyvzGJjI8Ex4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxMIwYzMkxVi4n-rdJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}
]
```
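The "look up by comment ID" step amounts to parsing this JSON array and indexing it by the `id` field. A minimal sketch, assuming the raw response is the array shown above (the `raw_response` string and the target ID below are illustrative, taken from the first record):

```python
import json

# Hypothetical raw LLM response, abbreviated to a single record for the example.
raw_response = """[
  {"id": "ytc_UgyUQyP2S_eytOtGfEx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

# Parse the array and index the records by comment ID.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Look up one coded comment by its ID.
target = "ytc_UgyUQyP2S_eytOtGfEx4AaABAg"
rec = by_id.get(target)
if rec is not None:
    print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
```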