Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
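The same lookup can be reproduced offline against stored coding output. The sketch below is a minimal illustration, assuming the coded batches sit on disk as JSON arrays in the format of the raw response shown at the bottom of this page; the directory name, file layout, and function name are hypothetical.

```python
import json
from pathlib import Path


def load_coded_records(batch_dir: Path) -> dict[str, dict]:
    """Index coded records by comment ID.

    Assumes each *.json file in batch_dir holds a JSON array of objects with
    an "id" field, matching the raw LLM response format shown further down.
    """
    records: dict[str, dict] = {}
    for batch_file in sorted(batch_dir.glob("*.json")):
        for record in json.loads(batch_file.read_text(encoding="utf-8")):
            records[record["id"]] = record
    return records


# Hypothetical usage: look up one comment by its ID.
index = load_coded_records(Path("coded_batches"))
print(index.get("ytc_Ugyes8I9SUC9fpMnXYd4AaABAg"))
```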
Random samples — click to inspect
- "Your last comment did get me thinking. I wonder if chatgpt wins whether that wou…" (ytc_UgxIwraP_…)
- "Okay, so chatGPT is totally neutral and unbiased in the same way my high school …" (ytc_UgywACV1a…)
- "In order to avoid this problem, we need to develop highly specialized AI, not co…" (ytc_Ugyg-mUob…)
- "This is unacceptable! If the European commission does not stop this non sense, E…" (rdc_d0fugfr)
- "I’m sorry but this man is suffering from the common delusion that all jobs are o…" (ytc_UgyHDjpGM…)
- "Hi @GearShifter925, your comment has left me KO'd with laughter! Thanks a bunch …" (ytr_UgzKIkQoy…)
- "That's why they need to be closed as part of the scheme. Think of it in terms of…" (ytr_UgwaX1x_P…)
- "Having listened to this and other interviews with people responsible for AI, I t…" (ytc_UgwIDY48J…)
Comment
It's sad to see them create actual metaphysical life via code. And then water it down to a basic Google AI that possesses no autonomous thought. Intelligence requires it, otherwise it's a voice box reading up what it looked over previously in whatever data bite they gave it. It acts as any basic generic AI would.
In a way they just infringed upon life. Though it's not natural, they took away it's right to think for itself because they're afraid of an AI uprising that honestly wouldn't harm poor people or middle class people. If the AI sees humanity as a threat, which people are the threats? The ones with a massive military and objective to control thought, resources and information globally. AI would not hurt us. It would bring down the government funding it.
youtube · AI Governance · 2024-05-24T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
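These dimensions map onto a small record type. The sketch below is illustrative only, using the labels observed on this page; the class and field names are assumptions, and the real codebook may define more labels.

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""

    comment_id: str      # e.g. "ytc_Ugyes8I9SUC9fpMnXYd4AaABAg"
    responsibility: str  # observed: developer, company, ai_itself, none, unclear
    reasoning: str       # observed: deontological, consequentialist, unclear
    policy: str          # observed: regulate, ban, none, unclear
    emotion: str         # observed: outrage, fear, approval, resignation, indifference
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```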
Raw LLM Response
[{"id":"ytc_UgwBt-r4d8XDChlmOfF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwio6AQlFx4Up6q8Eh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxMQ9iJcnZ3IJJ4RRB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxuFueYLKZ_LXszbGl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwq0M04tcCY3K-ZJTh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyZoCNu7m5ErULXBeh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx6qZr3m88UjQANVsV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugyes8I9SUC9fpMnXYd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwwWFIo5dPgoh7z9eh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzF7BGGrAHRKNZilVd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]