Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment

| Field | Value |
|---|---|
| Video | Two robots debate the future of humanity (#RISEConf, Apr 17, 2018) |
| Text | This Hippy of the 20th Century telling you that these are 2 robots talking to each other, When in fact it's just tpo people on there keyboards arguing at each other Bullshit stand off' No AI as of this year or the next' |
| Source | youtube |
| Dataset | AI Moral Status |
| Posted | 2021-08-01T16:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgztYOearGUbc9oIfux4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyHYLf1nlcoiQ8wslV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx54YWokPoEPcQTnE14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxbACzSLFmLzo0at8h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwI_jA1GKRTjpGCzB14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzBGHDG99A9Md0akEN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxL8dRT6WaNLUIr1xp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgykgM-wZ6mnyH-p9DR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzEFKNyoUtOdL-JomF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgypOXVdkT1EBxA1-m54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
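A raw response like the one above can be parsed and checked programmatically. The sketch below is illustrative, not part of the tool: it indexes records by comment ID and flags any value outside the coding scheme. The allowed value sets are inferred only from the responses shown here, so the real scheme may permit additional labels.

```python
import json

# Allowed values per dimension, inferred from the responses above.
# Assumption: the actual coding scheme may allow more labels.
SCHEME = {
    "responsibility": {"developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed"},
}

# Two records copied from the raw response above, for demonstration.
raw = '''[
 {"id":"ytc_UgztYOearGUbc9oIfux4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwI_jA1GKRTjpGCzB14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

def parse_batch(text):
    """Parse one raw LLM response; index rows by comment ID and
    list any dimensions whose value falls outside SCHEME."""
    records = {}
    for row in json.loads(text):
        problems = [dim for dim, allowed in SCHEME.items()
                    if row.get(dim) not in allowed]
        records[row["id"]] = {"row": row, "problems": problems}
    return records

batch = parse_batch(raw)
hit = batch["ytc_UgwI_jA1GKRTjpGCzB14AaABAg"]
print(hit["row"]["emotion"], hit["problems"])  # outrage []
```

Indexing by ID mirrors the "look up by comment ID" view above: a single dict lookup retrieves the exact coded values the model returned for that comment.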