Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I have really bad motor control, which makes it hard to do things like type and …
ytc_UgxhAXaC4…
I mean this in the most respectful and polite way - My country's govt. doesn't c…
rdc_o80qkvw
the self-driving cars leading the current wave are Tesla Model S, an electric ca…
ytr_Ugj7Pi5-k…
AI IS TAKING OVER I CANT LIE MOST OF MY AT@T ISSUES WAS SOLVED THROUGH THE APP. …
ytc_UgwrWfzNL…
@Aubreykun Counter challenge :
- The tool is not unethical, but people are.
- N…
ytr_UgwzP3dWg…
AI is not intelligent and I'm afraid of company leaders who believe in major cos…
ytc_UgyuECibg…
AI cannot understand such a thing. It can generate a statement that expresses th…
ytr_UgwjAzmKj…
So Fast and the Furious 2 eventually becomes real. Because automated trucks won'…
ytc_UgybFXnN3…
Comment
Don't bet against "AI"?
Actually, what the video says already exists in many domains.
Today's chess AI algorithms have surpassed the superhuman level and can run efficiently on low-end machines like a phone or a Raspberry Pi. But the naive "Deep Blue" approach got stuck at Kasparov-level intelligence despite massive compute infrastructure and massive data, nowhere near superintelligence.
Current "AI" like today's LLMs is like Deep Blue: a naive "fine-tune" approach running on massive supercomputers, not even close to the superintelligence level.
And in the end, IBM did not invent a NEW superintelligence; it is stuck in its past "Deep Blue" pride.
youtube
AI Responsibility
2025-10-28T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwGrjEBSDff3mJS1Xp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzt-0Ny5geqbG1VNmN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz0PxcoFMikfAWUJEF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyXWh_3ZqyOZ3OA7jl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyZdHFj3tdT5ZzwWd94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxm-0t2jsES6YOq91h4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxKl4_xMMyy9svw3h4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyhfIkhYlQELOec2914AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyHtMNaLErqJXzNwmt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXyGV9CjQ3MLuorqp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
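A downstream consumer of these raw responses typically parses and validates them before accepting the codes. Below is a minimal Python sketch; the allowed values per dimension are an assumption inferred only from the samples shown above, and the real codebook may define additional categories.

```python
import json

# ASSUMPTION: allowed values inferred from the coded samples above;
# the actual codebook may include more categories.
ALLOWED = {
    "responsibility": {"none", "developer", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every row against the codebook."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: invalid {dim}: {row.get(dim)!r}")
    return rows

# Hypothetical example row in the same shape as the raw response above.
raw = '[{"id":"ytc_example","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}]'
print(len(validate_codes(raw)))  # 1
```

Rejecting out-of-vocabulary values early keeps a single malformed LLM response from contaminating the coded dataset.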