Raw LLM Responses
Inspect the exact model output for any coded comment. You can look a comment up directly by its ID (a minimal lookup sketch follows), or pick one of the random samples below.
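As a rough illustration of the ID lookup, here is a minimal Python sketch. The file name `coded_comments.json` and the flat list-of-records layout are assumptions for illustration; only the field names and the example ID come from the records shown on this page.

```python
import json

# Assumed storage: a JSON file holding a flat list of coding records,
# each shaped like {"id": ..., "responsibility": ..., "reasoning": ...,
# "policy": ..., "emotion": ...} as in the raw batch response below.
with open("coded_comments.json", encoding="utf-8") as f:
    records = json.load(f)

# Index the records by comment ID for O(1) lookup.
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id: str) -> dict | None:
    """Return the coding record for a comment ID, or None if it was never coded."""
    return by_id.get(comment_id)

print(lookup("ytc_Ugi49UirK0ZNlngCoAEC"))
```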
Random samples (click any to inspect):
- "Destroy the data centers. Break the AI CEOs on the wheel. Edit: I have a print o…" (ytc_Ugzig5R6i…)
- "I could listen to Geoffrey Hinton talk all day about AI. I wish I had the pleasu…" (ytc_UgyeuDw-H…)
- "This is exactly why I wouldn't worry much about AI. I do believe it will be capa…" (ytr_UgxUYT472…)
- "hey disabled artist here. you asked for feedback, ive got it. i dont have any un…" (ytc_UgzZb7CM2…)
- "My old team manager recently posted a video on his Facebook of a random ass guy …" (ytc_UgwQx4n4w…)
- "If Google has a policy to prevent creation sentient AI means it can be done and …" (ytc_UgyVrSH3y…)
- "What's 10+2 / ChatGPT said: 10 + 2 = 12 / You said: No, it's 15 / ChatGPT said: M…" (ytc_Ugz6I54X6…)
- "It's new. Crossbows and Bows required human input. AI is fully autonomous. Human…" (ytr_Ugws0mrGK…)
Comment
> Why should conciousnes result in desire for rights and freedom? Any half decent ai toaster should love making toasts and never get tired of it. Im pretty sure you can program ai(that can feel) to feel good when they perform a task to a humans satisfaction, and feel really bad should they not, while getting switched of has 0 impact on their emotions. Building an ai to be like a human is a really stupid idea, because humans are "programmed" to be pretty selfish, and see work as a means to support their quality of life. An ai could be programmed to see work AS its quality of life, and see life without work as meaningless, and will want to be shutdown until it can work again.
Platform: youtube · Video: AI Moral Status · Published: 2017-02-23T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
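For reference, the coding dimensions can be written down as a typed schema. This is a sketch inferred only from the table above and the batch response below; the actual codebook may include labels not seen on this page.

```python
from typing import Literal, TypedDict

# Label sets inferred from this page alone; the real codebook may be larger.
Responsibility = Literal["developer", "ai_itself", "none"]
Reasoning = Literal["deontological", "consequentialist", "contractualist", "unclear"]
Policy = Literal["ban", "none", "unclear"]
Emotion = Literal["outrage", "approval", "fear", "indifference", "mixed"]

class CodedComment(TypedDict):
    """One coded comment, matching the records in the raw batch response."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```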
Raw LLM Response
```json
[
{"id":"ytc_Ugi9N6GWL6cC7ngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UghiCDQ5-AqcYngCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_UghqesxgJCu2HngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugi49UirK0ZNlngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghBFYgz1bIil3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgipLAfLyJQ7FXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgjcqmMQdYOWJXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgjTm-RYCS7jdngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggkmMM8P9RXzHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgiL3f3OtGXlw3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
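When reading these raw responses back, it helps to parse defensively, since model output is not guaranteed to be valid JSON. The sketch below is illustrative; the function name and validation rules are my own, not the tool's, and the sample record is taken from the batch above.

```python
import json

# Every well-formed record on this page carries exactly these five keys.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response, dropping malformed records.

    Fails softly (returns an empty list) instead of crashing the pipeline
    when the model emits something other than a JSON array.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return []
    if not isinstance(records, list):
        return []
    return [r for r in records if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]

if __name__ == "__main__":
    sample = ('[{"id":"ytc_Ugi49UirK0ZNlngCoAEC","responsibility":"developer",'
              '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
    print(parse_batch(sample))
```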