Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugz_ILKcf…: Thats false lidar is severely limited in it ability an it's easily blinded by ra…
- ytc_Ugy22RfWH…: Great video. I ride but if someone said to me "auto-pilot" I think "self drivi…
- ytc_UgyU7iZWt…: Ai art isn’t just theft, it’s anti freedom. What AI art does is not only steal, …
- ytc_UgyR9akK1…: Yeah, NO! This well just cost animations there jobs. Just like AI has done for I…
- ytc_Ugz27CdL9…: The problem with exponential growth, as it pertains to AI, is that you have to h…
- ytc_UgwhO0x3g…: the problem is that, for anyone who isn't an artist/hasn't spent a lot of time s…
- ytc_Ugw9Op4f5…: i wonder how these kind of people would react to real artists that support AI...…
- ytr_UgyOpFubx…: @Drew_Hurst we are dealing with subjective truth when it comes to AI. Technical…
Comment
Sabine - I like your comments. But I think they do not go far enough. The real problem for AI is that there is no definition or measurement of Intelligence, period. Are humans intelligent? Is there any measurement or proof? Not really. Sure, people can talk, listen, communicate, and think about how to solve some problems. But there are many limitations. For example, I can talk and listen with the English language, but not with other languages like French or German. If Google AI can speak English, French, and German, does that make it more intelligent than me? I would say not. I can solve some math problems, like algebra problems or the Pythagorean theorem. But I cannot prove the 4-color map theorem. An AI program (LEAN) can prove the 4-color map theorem. Does that make it more intelligent than me? I do not think so. When we try to make AI software more intelligent, we really do not have a definition of the goal. We do not really know what we want.
Source: youtube
Posted: 2026-02-11T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz1AId5aTrB0vU068x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwilofhGKhwDu0L7Ft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"disapproval"},
  {"id":"ytc_UgyFYXqIPZgSqF1BRIt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-tXmr1Qbf99MFrdF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy6ZM9msFzNFdQRhp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzGNsSq1iKLhPOIyGV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxWCzblxI-AOE55SLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxdU-TuCxJaaCrlPWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzRtXhqWxt8hXpcZyp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzGjyzcKfKocvAvCr54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
```
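The dimension table for a comment can be derived from a raw response like this by parsing the JSON array and indexing it by comment ID. A minimal Python sketch of that lookup, assuming the exact output shape shown above; the `lookup` helper, the two-row sample payload, and the "unclear on every dimension when the ID is missing" fallback are illustrative assumptions, not the actual pipeline:

```python
import json

# Hypothetical two-row sample in the same shape as the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugz1AId5aTrB0vU068x4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzGNsSq1iKLhPOIyGV4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# Index the parsed rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for one comment ID.

    IDs absent from the model output fall back to "unclear" on every
    dimension (an assumed convention, mirroring the table above).
    """
    row = codes_by_id.get(comment_id)
    if row is None:
        return {"responsibility": "unclear", "reasoning": "unclear",
                "policy": "unclear", "emotion": "unclear"}
    return {k: v for k, v in row.items() if k != "id"}

print(lookup("ytc_UgzGNsSq1iKLhPOIyGV4AaABAg")["emotion"])  # outrage
print(lookup("ytc_not_in_output")["policy"])                # unclear
```

Keying on the `id` field is what makes the "Look up by comment ID" view cheap: one parse of the raw response, then dictionary access per query.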