Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugwxp75_3…`: "I vote for a singular Super AI with Autonomy and rights and freedom. One that no…"
- `ytc_UgxSL1573…`: "Another great discussion. Question: who owns the output of AI? If Claude or gpt-…"
- `ytc_Ugyjz0DgP…`: "I'm freaking out. Trump AI, free from ideological bias, haha. The fraudster see…"
- `rdc_nxr8oy3`: "Exactly. The utopian vision of the future is that robots and AI will do all the …"
- `ytc_UgztoCzHd…`: "I am making my own AI model and I have gone through 2 styluses… wish styluses ar…"
- `ytc_UgyxoIzIk…`: "Is this the full CS50 AI course? Cuz on Harvard's website it states that is 7 we…"
- `ytc_Ugy1bT4_a…`: "They have forgotten the one thing about automation you always ALWAYS need a manu…"
- `ytr_Ugw8bNXKM…`: "Unfortunately she is blaming a robot to get money , as many parent using their d…"
Comment
As someone that knows a little bit of logic it's pretty hilarious for me to hear people talking about "self-conscious AI".
They can't be self-conscious if they can't "feel" emotions, because logic can't ever justify initiation of action. You don't get out of bed just bacause that's "useful", you get up because you always need to fulfil a physiological need that gives you a bad feeling when you don't fulfil it, like starvation, for example.
You only work on a job because you need to eat and make yourself feel good with things you like, like clothes, gamepad controllers, videogames, music, etc.
There's no logic in taste, only mechanics and some unknown starting point.
youtube · AI Governance · 2023-07-07T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxXKQRr-ubdY-jU8y94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxvf1p10agInz7XYOx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxbZEg9sRgJQErXs514AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzMdxxdrwRUuF6Vi3N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyyGSCte8LUaCQqcMd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy6jUBZBsAycr8jJOx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyzP2az6In_7bVEUhN4AaABAg","responsibility":"government","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxXuBO8OdAw_0abcpB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxsuMIgTXvPSQpqJpl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwmmzB4Z6QlaBBKgvR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
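The raw response is a JSON array with one object per comment, keyed by comment ID. A minimal sketch of turning such a response into a by-ID lookup, with a validation pass against the codebook; the function name `index_codings` and the allowed-value sets are assumptions inferred from the values visible above, not this tool's actual schema:

```python
import json

# Assumed codebook, inferred from the visible records; the tool's real
# codebook may define additional categories per dimension.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "government"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"indifference", "mixed", "fear", "approval", "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into {comment_id: coding},
    rejecting duplicate IDs and out-of-codebook values."""
    by_id = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        if cid in by_id:
            raise ValueError(f"duplicate comment id: {cid}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value for {dim}: {rec.get(dim)!r}")
        by_id[cid] = {dim: rec[dim] for dim in ALLOWED}
    return by_id

# Usage with two records shaped like the response above (hypothetical IDs):
raw = '''[
  {"id":"ytc_AAA","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_BBB","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''
codings = index_codings(raw)
print(codings["ytc_AAA"]["policy"])  # liability
```

Failing loudly on unknown values is deliberate: a silently dropped or mistyped code would skew the downstream dimension counts without any visible error.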