Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgxvP6kNu…`: "I agree. I agree. I agree. When it comes to AI art, it is better to judge an …"
- `ytc_Ugz3MBPSv…`: "At a moment when regulation should be the top priority, they’re choosing to dere…"
- `ytc_Ugy2cVHAf…`: "For those of us who are older enough to remember the classic Stepford Wives mov…"
- `ytc_UgwBfFsr_…`: "I'll take a speak and spell to rule the world instead of Donald Trump any day of…"
- `ytc_UgxT_A1HN…`: "As a person that works on AI frameworks chat gpt has no ability to remember any …"
- `ytc_Ugx_Qk7Rc…`: "AI having uncontestable access to scrape whatever it wants from the internet is …"
- `ytc_UgylEUNZ_…`: "Please please please do this again where you take the con side and use the AI's …"
- `ytr_Ugw5Q1Ebp…`: "@jardim4urora Ok... Non-Pro AI Art + Non-Pro Photography both require no effort…"
Comment
> I generally comprehend his opinions on various topics, but that's not the case with AI. A good question might be to know what regulations does he advocate for ? This might give some insight into what he has seen intelligent systems do, OR what he envisions AI to do ? There is something known are eager regulation / over regulation which adds roadblocks to innovation & the progress of civilisation. I haven't heard anyone elucidate the optimal boundary between regulating AI while not hampering innovation.

youtube · AI Governance · 2023-04-18T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw7UHgWBr872LE7PYF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy5cZpxWzQKZPo17cl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxtSGacH95xCZhGlzh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyNjmwGYQnTu6jg86p4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyL2-ibBeu8QQrFVpN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyhMYEDPdQv7gqg7ux4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyt3tQnHkN_-V4q1CJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwBQxlMHcpxY3KMOs54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzCUtTA61F6Fujw05R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwvKbr7Q0rvY73z_8Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
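The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such output might be parsed and sanity-checked before it is indexed by comment ID; the allowed value sets below are inferred from the codes visible on this page, not an authoritative codebook, and `validate_rows` is a hypothetical helper:

```python
import json

# Dimension -> allowed codes, inferred from the values observed above.
# The real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "government", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"resignation", "approval", "indifference", "mixed", "fear", "outrage"},
}

def validate_rows(raw: str) -> dict:
    """Parse a raw LLM response and index in-codebook rows by comment id."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # drop rows the model emitted without an id
        if any(row.get(dim) not in ok for dim, ok in ALLOWED.items()):
            continue  # drop rows with missing or out-of-codebook values
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"unclear",' \
      '"policy":"unclear","emotion":"outrage"}]'
print(validate_rows(raw)["ytc_x"]["emotion"])  # outrage
```

Indexing by `id` is what makes the "Look up by comment ID" view above possible: a valid row can be fetched directly by its `ytc_…` key, while malformed rows are silently excluded from the index.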