Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think people need to understand what an AI model is to assess risks properly. I kinda wish we didn't call it AI. These are just prediction engines based on what it learned from a huge volumes of data. It is not calculating, it is just delivering the word statistically most likely to appear based on the preceding words by having looked at all the preceding words for each of the word in it's vocabulary (a vast amount of data).
A big risk is that these models are not as capable as the vendors would like you to believe, sometimes I think the vendors like it when doomsday scenarios are discussed because it makes their product look more capable than it is.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-10-24T13:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzuloiXX9NyhPcCerp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxoKATJs_-p_pyisyd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxkMTZpL3o1OVxgbYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyN8lUbmNWdk2dffs14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwBhtnqoukhPTl8FSd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzzTOouq1je9BWmqSB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx_0e9quQvUALEUqVt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwf5wLWAQ5s-arN28B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgygEzIeTg02bEQwoYt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwRD3gum62tfJxg5lh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
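The raw response above is a single JSON array covering the whole batch, with each element carrying the comment ID plus the four coded dimensions. A minimal sketch of the ID lookup the inspector performs might look like the following; the `lookup` helper and the truncated two-entry payload are illustrative, not the tool's actual code:

```python
import json

# Illustrative excerpt of a batch coding response: one JSON array,
# one object per comment ID (two entries shown here for brevity).
raw_response = """
[
  {"id": "ytc_UgzuloiXX9NyhPcCerp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwRD3gum62tfJxg5lh4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"}
]
"""

def lookup(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID (KeyError if absent)."""
    rows = {row["id"]: row for row in json.loads(raw)}
    return rows[comment_id]

# Pull out the coding for the comment shown in the table above.
coding = lookup(raw_response, "ytc_UgwRD3gum62tfJxg5lh4AaABAg")
print(coding["policy"])  # industry_self
```

Building the dict keyed by `id` once makes repeated lookups O(1), which matters when a run codes thousands of comments per batch.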