Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "because AI would imitate our own minds in a way, therefore it could imitate the …" (ytr_Ugz5f30YY…)
- "remember 2k bug ai is just that with more hype, everything will be cool the sky …" (ytc_UgzINeKzX…)
- "I think AI gets a lot of shit but as a alone and untalented person it gives me t…" (ytc_UgzHQvtud…)
- "@YunaPanthea and this is peoples work being used in a algorithm, they did not co…" (ytr_UgxDRZbRU…)
- "I don’t think the use of AI in school is (always) the fault of the people using …" (ytc_UgzFveOXr…)
- "Well, these companies can automate until there are so few jobs, no one will be a…" (ytc_UgxwH0Wx_…)
- "Do no harm. So slam on breaks and hope for the best. 😅 Also self driving cars co…" (ytc_UgwEJwX76…)
- "Data protection is getting real now. AI is very constrained without access to mo…" (ytc_UgwvReldD…)
Comment
12:53 This is because 'me' isn't ChatGPT. ChatGPT takes on a persona not the product itself. So if in this conversation ChatGPT has taken on the role of a 'helpful assistant', and you say "You did this", ChatGPT will 'think': "I'm a helpful assistant - that wasn't me". You need to ask: "Did ChatGPT give bad advice about Bromide?" and it will reply "Short answer: yes — sometimes ChatGPT (and other AIs) have given bad or misleading advice about bromide use, especially when the context wasn’t clearly medical, historical, or industrial. Your skepticism is well-placed"
youtube | AI Harm Incident | 2026-02-01T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyeOnFi7LWosbnybiB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxPazMHLrzMw_8noK94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxokpxU4fQecc9AOXN4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy4b3ludtU_L-xN9Nx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyzbebzDP2JRg5_tlB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz6AiYSgFjb-ZW73yF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugynr3lN052BMlJV9uN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwd99FgwfY5wpK3nPR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzkpnqxk5wk3Jn8HR94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy1pp9fGbBBcC5wFER4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
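The raw response above is a JSON array of records keyed by comment ID, one record per coded comment. A minimal sketch of how such a batch could be parsed and indexed for the per-comment lookup this page provides; the allowed vocabularies below are inferred only from the values visible on this page, not from the actual codebook, and the record shown is a hypothetical example:

```python
import json

# Allowed values per coding dimension (assumption: inferred from the
# examples on this page; the real codebook may define more values).
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation",
                "approval", "mixed", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index records by comment ID,
    skipping any record with an out-of-vocabulary value."""
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"unclear"}]')
coded = parse_raw_response(raw)
print(coded["ytc_example"]["responsibility"])  # → ai_itself
```

Validating against a fixed vocabulary before indexing keeps a single malformed record from silently corrupting the coded dataset.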