Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "This is the stupidest person I have ever seen, putting a machine gun in the hand…" (ytc_UgxP8CAHI…)
- "What AI - superAI - is changing is us no longer can compete with superior knowle…" (ytc_Ugyan4qia…)
- "The obky AI art that is valid is art made by a person named Ai…" (ytc_Ugzfq3CiM…)
- "Though when I talk to my chatbot to help plan or give advice it doesn't much bet…" (ytc_UgyrDEnbG…)
- "As an artist and someone who loves AI and has contributed in that field, I total…" (ytc_UgxMPQE3K…)
- "1 ) look at eyes if they are very clear or not 2 ) look at the background it's b…" (ytc_UgwMpEf1e…)
- "Well the reason for that is because nukes are controlled by ai, and so all ai has…" (ytc_UgzKc78ta…)
- "This is a strange mischaracterization of the broader concept. The flaw is in exp…" (ytc_UgzdrtcwA…)
Comment
I use ChatGPT for checking whether my maths homework is correct, and even that proved to be an extra exercise, as it is often more work to check whether ChatGPT is correct than to do the original exercise.
The most enjoyable outcome was when it said something along the lines of:
"Calculating that would be difficult by hand and would require extensive external tools, in the form of (sum1, sum2, sum3, ....., sum1000).
Doing so gives the answer 4.37"
I would like to note that I used 3.5, which has no access to any tools whatsoever. It gets even worse if you tell it it's wrong, because it is most likely to respond with something along the lines of: "Thank you for correcting me, I was wrong; using external tools, the correct answer is 643.2"
Which, again, is nonsensical.
youtube · AI Responsibility · 2023-06-10T13:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyZyOpCw3yPy0qRuXJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgySRvqrU5UzfOdjkjl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxhtKLezJuTZWbo60h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxqskV-KZE9w-QPRnp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwHrgDIPSMAHuMvPH94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxO_P1ROZEgtoAI1_l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxlWO8xuKqVTMpOU354AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyxDeyyGhinEa7kIWh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz2vTwppfMdKH_HpCh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgztBYUEG953nvn3Fxh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
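The raw response is a JSON array with one record per comment and four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed, validated, and indexed for lookup by comment ID; the label sets below are only the ones visible in this dump (the real codebook may include more values), and `parse_coding_response` / `ALLOWED` are hypothetical names:

```python
import json

# Label sets observed in the dump above; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "none", "company", "developer"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"mixed", "approval", "fear", "indifference", "outrage", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example with a shortened, hypothetical ID:
raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded["ytc_x"]["emotion"])  # approval
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each coded comment resolves in one dictionary access instead of a scan over the array.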