Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- if there will be one day only one spark of a new idea/conviction=>resolution in … (ytc_UgyVJaTZH…)
- Not any Robocops now. 🤔 But drones will all but eliminate law enforcement!! look… (ytr_UgzwQOSoO…)
- A lot of these comments are crazy. The parents do have fault but there is still … (ytc_UgydQtmTR…)
- They just wont stop bro its like thes ppl wake up to be a step closer to a domin… (ytc_UgyC6nOxc…)
- two months later and lo and behold disney and open ai decided to pirate disney s… (ytc_UgzWOMn_g…)
- I feel you bro, very recently I switched to a stem career as a last moment decis… (ytr_UgyDoHMo8…)
- How to develop benevolent AI? Developing benevolent AI at all levels is the key … (ytc_Ugxkc6gKW…)
- Live your best life possible, treat everyone with compassion and love 💕 I will c… (ytc_UgxDvWWpZ…)
Comment
@tosemusername My main intention was to point out the danger of category error: a common problem where people anthropomorphise “AI” and confuse themselves and others about what it refers to. My personal observation is that most people who have concerns connect them in some way to sentience, agency, or consciousness. I know the technical terms often use AI to describe systems that learn patterns from data in order to perform tasks. The tasks, however, are merely associated with human intellect; they are not evidence of strictly human aspects like lived experience or independent will. Such systems are already common across many different industries and homes.
Now, in terms of bias: I don’t think it’s always the best idea to use the term in the context of LLMs as a purely negative one, because training is, in a literal statistical sense, the introduction of some kind of “bias”. Training means embedding patterns, and embedding patterns means embedding bias. Decided to make a model helpful rather than sarcastic? Bias. Chose English as the primary language of the training data? Bias. If we say safety tuning “injects bias”, that is technically correct, but it would be like saying that steering a car “injects directional bias” ... that is the purpose of steering.
We could of course try to determine who gets to decide which “bias” is the correct one, and to what extent it forces the model to act in line with the requirements. But given the speed of technological progress, competitive markets where AI is already becoming one of the dominant technological focuses, and the complexity of the user (human complexity that includes intellect, experience, character, personality, etc.), it seems almost impossible to make that happen, or at least impossible in a practical sense.
So, realistically speaking, we face the two most obvious ways to proceed: stop using LLMs completely (good luck convincing everyone!), or accept that no training/bias is going to be perfectly safe for everyone. While we make continuous attempts to use feedback and updates to correct errors, we must also accept that mistakes and some consequences will occur.
Personally, I would note the part in this that relies on “us”, the people. We all live here and must be able to make decisions and sometimes, sadly, deal with the consequences. All part of the package deal: "Immersive experience: human life in society".
youtube · AI Responsibility · 2026-02-17T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgxtQSVqwGCW6aNizgl4AaABAg.9pE1jPyQpB79tvmX_yqgoQ","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugz0dYKsSUzt6GbzJrp4AaABAg.AVxfjifZz2yAVxuN7aNjv2","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwueX5TTao_NqmQTh14AaABAg.AT4esLCMoeBAT6F8v1Sj1V","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytr_Ugw_aYBhWsNdM1PRt8x4AaABAg.AT4GhZ9pCv5AT5SDZx23f8","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgysErl8b8XQolbcyRV4AaABAg.AT2tZPhVsC5AT4gFy4hpwm","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgysErl8b8XQolbcyRV4AaABAg.AT2tZPhVsC5ATJIsYgBxBE","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugx9sYJgBFdZvZ1wVRZ4AaABAg.9VdgrCLXliQ9VezhuMIEqK","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_Ugw0vrJipj8iZjf0JTZ4AaABAg.9VciHkxK_Fl9Vm_3kny_xG","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgwDrP_0u555N0nbbuh4AaABAg.9Vc8Tj9WbDi9VrTDIJsgzf","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxuH1s_KeJXQA5GDtB4AaABAg.9Vc5G_9ke129VcVNkweDLl","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
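The raw response above is a flat JSON array of per-comment coding records. A minimal sketch of how such output might be parsed and validated before it reaches a results table like the one shown (the `SCHEMA` value sets here are inferred from the visible samples, not taken from the actual codebook, and `parse_coding_response` is a hypothetical helper):

```python
import json

# Allowed values per coding dimension, inferred from the samples above
# (assumption: the real codebook may define more categories).
SCHEMA = {
    "responsibility": {"ai_itself", "government", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of records) into a dict
    keyed by comment ID, rejecting unknown dimensions or values."""
    coded = {}
    for record in json.loads(raw):
        comment_id = record["id"]
        dims = {k: v for k, v in record.items() if k != "id"}
        for dim, value in dims.items():
            if dim not in SCHEMA:
                raise ValueError(f"{comment_id}: unknown dimension {dim!r}")
            if value not in SCHEMA[dim]:
                raise ValueError(f"{comment_id}: {dim}={value!r} not in codebook")
        coded[comment_id] = dims
    return coded

# Example with an illustrative (made-up) comment ID:
raw = ('[{"id":"ytr_x","responsibility":"company","reasoning":"deontological",'
       '"policy":"regulate","emotion":"approval"}]')
print(parse_coding_response(raw))
```

Validating against a fixed value set at parse time means a malformed or hallucinated label surfaces as an explicit error rather than silently entering the coded dataset.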