Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If AI takes all the job, where would the humans get money to also buys what the …" (ytc_UgwVic48b…)
- "Full-scale nuclear warfare It is more dangerous. Then artificial intelligence. F…" (ytc_Ugw_40BYo…)
- "The United States have been stealing and selling people's copyrights and patents…" (ytc_Ugy0SzfmM…)
- "The algorithm of this program sucks pork? And of course people of melanin, not c…" (ytc_Ugy-H3dDd…)
- "This video to me relates directly to the MBTI and proves that we cannot predict …" (ytc_UgycOfrfx…)
- "I hate how people use ai and then call themselves artists. That's just straight …" (ytc_UgycI9OsU…)
- "A robot will never be a friend You will kill your own people With this War will…" (ytc_UgybHEaEm…)
- "Can you please interview Australian Biologist, Jeremy Griffith? He may give hum…" (ytc_UgxciOW_9…)
Comment
AI is an inevitability, so it's better that AI develops and is released progressively in public leading first as closed source to get society used to the ramifications and to give humans time to adapt and time for the government to regulate and mitigate the ramifications of pervasive AI.
And for the record, I'm pro-open-source, but when AI tools are open source, then there are no safeguards in place for how it is used. So the rate of progress for closed source AI should at least exceed the rate of progress for open source AI models so that "AI literacy" can spread around the world so that people are more prepared for when there are no restraints on the AI.
Source: youtube · Posted: 2024-05-16T19:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyiOM3SZ_5p0gRaWUV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzUmxvQ17p8o2uJSMZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGtJDDLXtgQPRqloZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyT7jrUo409g04jzpx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwRAj1Bh7LkXOL4HXB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyMKHZGxn8k2LI47Yd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzu519XUaVjsnIkwBZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgyD3g2tOz8-BWmK81t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwqYLA5C32NFfw9jER4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxO2ZlFk368oh0Pgep4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
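As a minimal sketch of how a raw response in this shape might be consumed, the snippet below parses the JSON array and indexes the codings by comment ID so that a single comment's dimensions (responsibility, reasoning, policy, emotion) can be looked up directly. The function name `index_codings` and the two-entry sample payload are illustrative assumptions, not part of the tool itself; only the field names are taken from the response above.

```python
import json

# Illustrative excerpt in the same shape as the raw LLM response above:
# a JSON array with one object of coded dimensions per comment ID.
raw_response = """
[
  {"id": "ytc_UgwGtJDDLXtgQPRqloZ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyT7jrUo409g04jzpx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response string and index the codings by comment ID."""
    codings = json.loads(raw)
    return {entry["id"]: entry for entry in codings}

by_id = index_codings(raw_response)
print(by_id["ytc_UgwGtJDDLXtgQPRqloZ4AaABAg"]["policy"])  # → regulate
```

Indexing by ID mirrors the "inspect any coded comment" workflow: given a comment ID, the coded dimensions are recovered in one dictionary lookup.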