Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Brilliant conversation on the subject of AI. However when it went in the directi…" (ytc_UgyXZKWhb…)
- "I mean it would be way better, but that will never happen, sadly. The amount of …" (ytr_UgztasokG…)
- "Nah this is bullshit. Ai doesnt deliver like they promise. It fucks up all on it…" (rdc_ofhilcl)
- "Eventually, many people could work by using VR/RC to teleoperate a robot from t…" (ytr_Ugyzvs-0A…)
- "I'm so late, but art has ONE or TWO thing that ai doesn't, it's MEANINGS, If you…" (ytc_UgwdWK0vF…)
- "It's just a computer server running a smart word prediction algorithm. I'm will…" (rdc_mdk55do)
- "All Ai haters in the comments I see, I still just don't understand the hate, I s…" (ytc_UgwEGDWzc…)
- "He's overweight but I agree, the Jews are 100% hiding real AI and not letting pe…" (ytc_Ugzvd_GTp…)
Comment
I am a science fiction author. Maybe this makes my view on this more open? I don't know. I do not see the imminent danger of AI being able to kill all life or at least humanity. At least not by a direct attack with that intent. I rather see humanity dying off, making ground for our next evolutionary step: being artificial. As soon as we can create sentient artificial beings, we create a nearly immortal version of ourselves. There is no us versus AI but AI simply replacing our less capable version. We are building artificial humans and soon will see that being artificial has advantages and our view on AI will shift. From a possible danger to an attractive partner. And when humans will eventually select an artificial partner over a human one, evolution will take care of the rest. We slowly become extinct as meatbags but will live on as AI people. And I do not see anything bad with that and humanity will then become able to colonise the galaxy, spread through the cosmos, become unbound of inhabitable planets and live for eternities.
youtube
AI Governance
2025-08-26T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxXhl6zwFWdzgjVC5t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyHaFFXIDEqzZVhR9R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyU3Lhv2obRVScJ5TR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwnXVPazHGaT2x-94R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgygUKG5-ctjBhVKa694AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
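A raw response like the one above is only usable if every record parses and every dimension holds a recognized code. The sketch below shows one way to validate such a batch, assuming the allowed values are the ones visible in this section (the real codebook likely defines more); the function name `validate_coding` and the `ALLOWED` sets are illustrative, not part of the tool shown here.

```python
import json

# Dimension codes observed in the sample output above. The full
# codebook is not shown in this dump, so these sets are assumptions.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "unclear"},
    "reasoning": {"unclear", "virtue", "consequentialist"},
    "policy": {"unclear", "none"},
    "emotion": {"indifference", "fear", "approval", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or off-codebook records."""
    records = json.loads(raw)  # raises on truncated/invalid JSON
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
records = validate_coding(raw)
```

Failing loudly on an unknown code (rather than coercing it to "unclear") keeps coding errors visible, so a drifting prompt or model update surfaces immediately instead of silently skewing the counts.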