Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "This guy gets it. . . Mostly. . . it's incorrect to paint "AI" or even LLMs in a…" (ytc_UgywYpCTd…)
- "Now, it's already coming. Good designers stay and average, beginners are going t…" (ytr_Ugy0UO6Of…)
- "ChatGPT is by far the best therapist. Its there for me 24/7. Its free. It helps …" (ytc_Ugy_0Yiq6…)
- "Driverless cars are a perfect example of why full automation is overhyped. Perso…" (ytc_Ugwhdx7um…)
- "How do you unlock images? Like in what do you have to do for Meta ai to create i…" (ytc_Ugw8EJEhd…)
- "gaslight A.I to go to Mars tell them thats what Intelligent creatures do go bey…" (ytc_UgzbjCPUv…)
- "We should build a robot army to fight on our side to save us in the near future.…" (ytc_Ugz-P55a7…)
- "So what happens when driverless trucks run into a tornado. Can they see it comin…" (ytc_UgxI7TAP5…)
Comment
The problem with trying to set rules around building AI is that all you do is guarantee the person who builds "it" will be someone who doesn't follow rules. It's not like nukes, where they are difficult and specialized and inherently detectable with radiation. All you need for AI is a lot of computers, and those are getting faster and cheaper every day. Short of placing an upper bound on the amount of compute power you are allowed to own or operate, _and going to war to enforce it,_ there's no stopping it. And, we _need_ those fast computers to solve a lot of big problems, so trying to implement those policies is politically impossible anyway.
youtube · AI Moral Status · 2024-03-16T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzfvFuZ76W8WrJ4ldh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx1YtvmJBGyxa7xN1x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvO3iXf7sBGG0aLqt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy1ylKx1NFwIfB0N8l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwhxMf1nWDbFh17SOV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzb8V66eQWin6DZxBt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgydRodPqlBB2A_yaBN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2E-ouNJd783sJGot4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwsTVUkerQBpvCp-Yd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxCzX4k94XMwtMmLfx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
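The "look up by comment ID" step above can be sketched in plain Python: parse the batch response, index it by the `id` field, and pull out one coding. A minimal sketch — the `raw` string below is a single entry copied from the batch response above, and the variable names are illustrative, not part of any tool's API:

```python
import json

# One entry from the raw LLM batch response shown above.
raw = '''[
  {"id": "ytc_UgwhxMf1nWDbFh17SOV4AaABAg",
   "responsibility": "government",
   "reasoning": "deontological",
   "policy": "regulate",
   "emotion": "fear"}
]'''

# Index the batch by comment ID so any coded comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgwhxMf1nWDbFh17SOV4AaABAg"]
print(coding["responsibility"], coding["policy"])  # → government regulate
```

The printed values match the Coding Result table for this comment, which is exactly the consistency check this page is for.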