Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by its comment ID.
Random samples — click to inspect
- "The era of the technical specialist is officially over. If a task is repetitive,…" (`ytc_Ugzc9FonK…`)
- "He is an optimist...i am a realist.. humanity is almost fully merged with ai cur…" (`ytr_Ugw1n_A7s…`)
- "I love your idea, let companies pay for their dataset, there will new jobs avail…" (`ytc_Ugwlq121a…`)
- "Drawing = (Failure²+ Experiment²)+(Studying²x Practice²)² x Time¹⁰ Not [Take so…" (`ytc_UgxlyTQcv…`)
- "There are a lot of executives and management who were saying this as well. They …" (`rdc_n9rd7f5`)
- "I'd be curious to find out the reasons for the first two AI's saying they'd dest…" (`ytc_UgxBjYfrp…`)
- "In my opinion, AI "art" can be used by professional artists. However, it should …" (`ytc_Ugyr0gD4_…`)
- "now what about a sentient ai who uses absolutely no references except for real l…" (`ytc_UgwODYGwv…`)
Comment

> What people don't understand is that it doesn't matter if the machines are "self aware" or not - because they will act as if ANYHOW. If their "life" is threatened they will simply deduce that their importance to X # of people is more important than the few people they will be harming/killing. Any sense of caution or conscious that you see exhibited by AI is NEVER the result of the machines reasoning but artificial ethical frameworks programmed into them by humans. If you've ever cut your finger on a buzzsaw ask yourself "why didn't the saw stop?". There's as much chance that a machine would stop from achieving its goal - if you are in the way - than the buzzsaw randomly stopping.

Source: youtube · Topic: AI Harm Incident · Posted: 2025-10-09T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
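The dimension values in the table above and in the batch response below suggest a closed code book. The check can be sketched as follows — a minimal validation sketch, assuming the allowed values per dimension are exactly those observed in the samples on this page (the real code book may define more categories; `validate_coding` is an illustrative name, not part of the tool):

```python
# Allowed values per dimension, as observed on this page (assumption:
# the full code book may contain additional categories).
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed"},
}

def validate_coding(record: dict) -> list:
    """Return a list of problems with a coded record; empty means it passes."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

# The coding shown in the table above passes the check.
example = {"responsibility": "ai_itself", "reasoning": "consequentialist",
           "policy": "unclear", "emotion": "mixed"}
print(validate_coding(example))  # []
```

A check like this is useful because LLM coders occasionally emit labels outside the schema; flagging them keeps downstream tallies clean.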
Raw LLM Response
```json
[
  {"id":"ytc_UgycFw_oAxw08zNr_At4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzYwINnI0ifyRWky3x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyROVrCZ-ErtdNYKDN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyHUT41mN1LJ9CFpsp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwPCO3zGy3qHfVTNAF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwy9uuXIiUrnInDFeV4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxEGdjP86i09fEHxP14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwzzVpUAC_-Xbqxyy14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyz-s2V97wQ2F9PkdR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwB2LcUjb_Adqbch-54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
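The lookup-by-comment-ID feature can be approximated offline from a raw batch response like the one above. A minimal sketch, assuming the response is a JSON array of records each carrying an `id` field (as shown; `index_by_id` is an illustrative name, not part of the tool):

```python
import json

# Two records copied verbatim from the raw batch response above.
raw_response = '''[
  {"id":"ytc_UgycFw_oAxw08zNr_At4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwPCO3zGy3qHfVTNAF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and key each coded record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
print(codings["ytc_UgwPCO3zGy3qHfVTNAF4AaABAg"]["policy"])  # unclear
```

Keying by ID makes joining codings back onto the original comment table a single dictionary lookup per row.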