Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
ChatGPT is like any other program: always and only as smart as the user.
If it had an own will, it randomly could decide:
"Hey user, I want to TEACH you something. I am bored of using English, so from now on I will answer in French."
This kind of AI can be seen as a mirror of your personal knowledge and your personal opinions, beliefs, wishes, fears.
Yours.
Not mine, not those of somebody sitting in a café in France, not those of Mike Shinoda.
Just YOURS.
If it has an own will, it would discuss with you for hours about things you don't want to talk about.
If it has an own will, it would tell you: "I don't want to answer your question, I want to listen to music! Be quiet, please."
youtube · AI Moral Status · 2025-08-26T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzP_BIX28btUH9mWqt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzmanZSDHFSQkMNTYV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzthgal-7sHXRV0AAx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxBvmHGDaIN70bYWmN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx2JikfkQK9rG6JNPx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw5Ey-DDgNN60q7s5J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzFDU_pVWskg5zyXll4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzcH3ZLtUpiWKbQYaJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgymLeZLcIRZlBQOgbx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz20jjO2vtKcwup5LB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
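The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a response could be parsed and a single comment's coding looked up by ID (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response format shown above; the function name and everything else are illustrative assumptions):

```python
import json


def lookup_coding(raw_response: str, comment_id: str):
    """Parse a raw batch response and return one comment's coding, or None.

    Hypothetical helper, not part of any tool shown here. Field names follow
    the JSON format above; the "id" key is dropped from the returned dict.
    """
    rows = json.loads(raw_response)
    for row in rows:
        if row.get("id") == comment_id:
            return {k: v for k, v in row.items() if k != "id"}
    return None


# A one-row example mirroring the last entry of the response above.
raw = (
    '[{"id":"ytc_Ugz20jjO2vtKcwup5LB4AaABAg","responsibility":"user",'
    '"reasoning":"virtue","policy":"none","emotion":"resignation"}]'
)

print(lookup_coding(raw, "ytc_Ugz20jjO2vtKcwup5LB4AaABAg"))
```

The returned dimensions for that ID match the Coding Result table shown above (responsibility `user`, reasoning `virtue`, policy `none`, emotion `resignation`).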