Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_Ugx8ZEuH3… — "Only humans have such a great ego that they see themselves as a threat. If robot…"
- ytc_UgxcnvNfP… — "Goes hand in hand with the buildout of automated surveillance; they're going to …"
- ytc_UgyhUjr_N… — "Ok, they studied together back there. The difference: they made technology while…"
- ytc_UgxkjyJRM… — "I'm not denying whose fault it is. I'm making a point thatit's supposed to avoid…"
- ytc_UgwfFTNS7… — "AI image generators using other works to make works vs artists using other works…"
- ytc_UgxKAoGBi… — "this dumbass cop just refuses to engage with the possibility that the ai softwa…"
- ytc_Ugy17py9x… — "We said this about the car industry we said this about the nuclear industry. Now…"
- ytc_UgyA3cBx2… — "I know what I’m going to suggest may be radical to some. However, how about we s…"
Comment
I don't trust Sam Altman. This video is scary it's sounds like Sam Altman is in a super rush for a new much higher super AI. We only heard about Open Ai and then GPT 3.5 and 4 in the last 6 months and now Sam Altman wants a much more powerful one. Shut it down now. It is getting scary too fast. Even Elon Musk said there should be a moratorium on Ai more advanced than GPT4. Therefore I feel that the US Congress should immediately in 2023 create laws and guidelines and safety protocol in place now before any Ai more powerful than GPT4 is released. Even Sam Altman wants governments involved to make sure that Ai is safe and beneficial for humanity. Maybe Altman want to look like he is protecting but he's sounds like he's in a race as his priority and maybe include a few precaution statement along the way to Ai domination.
Source: youtube | Category: AI Governance | Posted: 2023-06-16T11:0… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx6oSdLc44qgWFyuEt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgytxUHdEAs7IRGN7354AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyVA66pZX_wYQSRGU14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyhFkZ0xeeC-Teu7Eh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzNWzWw0L31H2E2oqh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzPhslw8jtd0lY_MNB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzpr2KxXqg_cRpWlUN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyNCskXEcKHiNuHUFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzD6twY71PK6_QIgQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzETJ8XmCEyUB2-w654AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
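A raw response like the one above is a JSON array of coding objects, one per comment, keyed by comment ID. A minimal sketch of parsing and validating such a response follows; the allowed value sets are inferred only from the codings visible in this sample (an assumption — the full codebook may define additional categories), and `parse_raw_response` is a hypothetical helper name, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may permit more values than shown here).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of codings) into a
    lookup table keyed by comment ID, validating each dimension."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim}={row[dim]!r}")
        coded[comment_id] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage with a one-element response (hypothetical comment ID):
raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}]'
coded = parse_raw_response(raw)
print(coded["ytc_x"]["policy"])  # → ban
```

Validating against a fixed value set catches the common failure mode where the model invents an off-codebook label, so malformed batches fail loudly instead of silently entering the dataset.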