Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Yes, feel free to PM me. I'm a developer who works with AI. I also enjoy talking…
rdc_n0nl4k4
Today AI has replaced net 0 jobs, where itight have it has reduced productivity …
ytc_UgzZRj5hD…
programmed todo it wont think. answer is correct or wrong. he not know. result n…
ytc_UgzL9sLAt…
Here me out use Ai art to destroy Ai
So I assume you know that Ai art taken from…
ytc_UgxgkEG5p…
You misunderstand. As an artist all my life, we are inspired by other artists ye…
ytr_UgzHyc728…
I think AI is mostly an excuse to make companies more lean. As someone who uses …
ytc_Ugwn-A11X…
I asked ChatGPT 3.5, what the square root of 2187 is and it came back with 33.…
ytc_UgzjsKwN4…
Anyone else feel like AI can sometimes make us too reliant? I switched to using …
ytc_UgxkUxli1…
Comment
Even thinking MACHINES will default to self-preservation values. But artificial superintelligence brings forth thousands of these sorts of risks and dangers! Humans do not even know how to fully assess the catch-all concept of “danger”. We are in a terrible fix in 2025! We cannot stop AI because there is so much potential gain in continuance. We see that “gain” and greatly discount dangers. In the next decade, dozens of extreme dangers will be identified. But we humans are making a mistake in thinking alignment is possible. It isn’t! The only hope of programming and design models is to SEPARATE AI functions into discreet programs which do not interact in any way. But even that is too optimistic! We want ABILITY, and that exists at cross-purposes with SUSTAINABILITY. The doom dynamic grows larger every day!!!
youtube
AI Harm Incident
2025-07-27T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugzt9PmiL22O767srfB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzqbrW2Tdm5nH9ki7F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzpsQ82cRN_GtRYvNZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzWUUCUHsfMV8DIOEh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwnMTvW3-OYMwNXoZF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyBb358i-Pej_uHSFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz7fmM4la9BeqAsvtR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzriPFTP6AjFCZ0qsV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxW2S30LNS32z1X6vd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzU0W1HTqR2iZCgT3x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
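The raw response above is a JSON array with one object per comment: an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) shown in the result table. A minimal sketch of how such a batch could be parsed and keyed by comment ID; the function name and the field-completeness check are illustrative, not part of the actual pipeline:

```python
import json

# Two records copied from the raw LLM response above (array truncated for brevity).
raw = '''[
{"id":"ytc_Ugzt9PmiL22O767srfB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyBb358i-Pej_uHSFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text):
    """Parse a batch response into {comment_id: {dimension: value}},
    skipping any record that lacks an id or one of the four dimensions."""
    out = {}
    for rec in json.loads(text):
        if "id" in rec and all(d in rec for d in DIMENSIONS):
            out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codes = parse_codes(raw)
print(codes["ytc_UgyBb358i-Pej_uHSFp4AaABAg"]["policy"])  # regulate
```

Keying by `id` makes the lookup-by-comment-ID view above a single dictionary access, and dropping incomplete records guards against partially formed model output.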