Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.
Random samples

- "@lukascapek2018 once E.L.A.E.N.A. is merged with an open source ROS enabling i…" (ytr_UgwglOfbs…)
- "Also, there has been a very recent study that pointed out that traits like narci…" (ytc_UgzpWCu0l…)
- "yet there's still something about it that makes it look like it's generated by A…" (ytr_UgwxsISxx…)
- "This is a reminder that AI generated content cannot legally be copyrighted so it…" (ytc_UgwZFJl9b…)
- "19:15 Im sorry but.....I LOVE THIS IMAGE SO MUCH!!!!!!!!!!!! (AI could never tbh…" (ytc_UgymSbTZT…)
- "The Korean government just announced a plan to use AI technology in classrooms a…" (ytc_UgxLOgVgA…)
- "For f sake🤦🏻🤦🏻🤦🏻 we dont need ai to steal our means to make money and pay bills.…" (ytc_UgxmYM7qx…)
- "You know why? So we can’t tell the different between real and AI. AI is filterin…" (ytc_Ugy4so2tR…)
Comment

> Honestly, I keep seeing these 'AI will never be conscious' debates, and it’s almost like arguing whether a toaster will ever dream. AI’s just a tool that does its job, and that’s the beauty of it. We don’t need it to be 'conscious' for it to be incredibly helpful. So yeah, it’s kind of funny that this keeps coming up like it’s the main question of the century!

youtube · AI Moral Status · 2025-08-30T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgzWzJjP737zbCD_gYh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwxaQ_aoHdSB09uQZN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxBj6sDWZTqKasxg1F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwq-gSKt4oB9Am5nd14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugws2AYYzYu-k5UgKBh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzh0d5jRFyBLlSUW294AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwfi0y9nCJCf75bMgN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwFQiFtvYno0j1Gk2d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzuG-wCRYI5GsmJblZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzqI36W9zK4Q6hl7fl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}]
```
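A batch response like the one above has to be parsed and validated before the labels are stored. A minimal sketch of such a parser, assuming the dimension names and label sets shown in the samples (the full codebook may allow more values than appear here):

```python
import json

# Allowed labels per dimension, inferred from the sample output above;
# the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: labels},
    rejecting records with unknown dimensions or label values."""
    out = {}
    for rec in json.loads(raw):
        cid = rec.pop("id")  # remaining keys are the coded dimensions
        for dim, label in rec.items():
            if dim not in ALLOWED or label not in ALLOWED[dim]:
                raise ValueError(f"unexpected {dim}={label!r} for {cid}")
        out[cid] = rec
    return out
```

Keying the result by comment ID makes it easy to join each coding back to the stored comment record, as in the "Coding Result" table above.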