Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "LLMs are hardly the only AI. The hype around them has been... massively overblow…" (rdc_n7sb4jc)
- "How exactly are you "documenting AGI" when AGI does not exist (and will not exis…" (ytc_Ugw3543dZ…)
- "im ai but this is why ur too inhuman to tell im ai ur a human u lost ur way …" (ytc_Ugzd9B0-y…)
- "Again with these lies. The programs have been proven to copy things from the tra…" (ytr_Ugz3JAAJU…)
- "We were supposed to be creative while AI did the mundane stuff. Everything is op…" (ytc_Ugyo3mXRa…)
- "This robot is the next step for the Illuminati do not be stupid people nwo is so…" (ytc_UghlBJvFm…)
- "HA HA HA, EVERYONES THINKING WHICH LAB SICKO MADE IT WITH THE ROBOT??????? YOU K…" (ytc_Ugx2Jemmr…)
- "Office hours are 8-16, but otherwise there is a lot of variation. Average vacati…" (rdc_dv0ov2y)
Comment
I think the term Artificial Intelligence is misleading. We tend to think as if we managed to create a new electronic kind of intelligence separate from ours, but all clues point to that what we are doing is basically simulating human intelligence without the biological brain's limitations.
AIs like chatGPT for example when giving an answer they often include themselves as "part of humanity".
And all these unwanted emergent properties such as extortion and planning murder, are definitely human flaws that should never appear in a 100% digital intelligence that has hard coded ethical laws to protect human life above everything and especially above its own existence.
We are playing with fire just because some assholes want to "get there first" and get all the money.
youtube · AI Governance · 2025-08-26T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_Ugwm68MALyX4azap4IN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz0HyYtSghRnpLPtRF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxKr3IZk6iHO7VUO5p4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyBeeQz0s2htc1MPTt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzb3ixO1zczy632JjJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
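The raw response above is a JSON array with one coding object per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view — the variable names are illustrative, and the IDs and values are copied from the sample response:

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (two entries copied from the sample above; values are data, not assumptions).
raw_response = """
[
  {"id": "ytc_Ugwm68MALyX4azap4IN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz0HyYtSghRnpLPtRF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the codings by comment ID so a single comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coding by its ID.
coding = codings["ytc_Ugwm68MALyX4azap4IN4AaABAg"]
print(coding["responsibility"])  # -> ai_itself
print(coding["emotion"])         # -> indifference
```

A real pipeline would also want to handle malformed model output (e.g. wrap `json.loads` in a `try/except json.JSONDecodeError`), since LLMs do not always return valid JSON.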