Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| Excerpt | ID |
|---|---|
| I assume you meant "A cartoon image of a chess board, with the starting position… | rdc_nnu2lm8 |
| Well at least you didn't get the HPMOR online cult leader co author on. This is … | ytc_UgyGgflIE… |
| So do Google *and* Microsoft *and* Amazon, by the way... Anthropic and OpenAI's… | rdc_o7zvcsi |
| Ok take 100 random people and teach them Excel a to z. Afterwords How many of th… | ytc_Ugzi2CBB6… |
| What, this was 5 years ago. I thought this was now, what with all the new AI tec… | ytc_UgyJOA4lT… |
| Yes, there are a LOT of similarities in how AI produces images based on what its… | ytc_Ugxx69Raf… |
| I think people should read the Bible for themselves and stop asking AI for answe… | ytc_UgzN2b9nc… |
| I was going to defend this guy if he was training things, using control AI, and … | ytc_UgyZM0pmV… |
Comment
1:16:30 He makes a good point here, putting some emphasis on getting regulatory bodies to which AI developers have to give their babies for observation, and having to state why they would do no harm later on when they become part of society.
In a way, Sam Altman's move to tell Congress was a way to tell the world: "Hey, I told you this could go wrong, and when it does, well, as I told you now, you can't ask my company for any damages that would have been caused by my product, because there is no regulation I would not have complied with." In the USA as well as in the EU AI Act, these regulations are foremost neoliberal in nature, meaning business-friendly, so they can do whatever the fuck they want, at least in the last draft I looked at.
youtube
AI Governance
2023-06-27T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyEhL4ch47VLdP9gNJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugy54_8cttHpxZSJiJd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzoUkud1w7TAbQHNYJ4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx4ml_9jq-QphGs3QN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwt2RbzurF3SGpPwPB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx8yUV9CM49pTu14AR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8fQDWMBP-0LRsOAB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyPmsCuJ23rvS19wY54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxAAEp9lz-G1mKP3sl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxcuDNaybYEsp5vnLZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
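The lookup-by-comment-ID view above can be pictured as a simple index over this JSON array: each record carries an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a lookup might work — this is an assumption for illustration, not the tool's actual code, and the embedded records are a two-item subset of the response shown above:

```python
import json

# Two records copied from the raw LLM response above; the real response
# is a longer array with the same shape.
raw_response = '''
[
  {"id": "ytc_UgzoUkud1w7TAbQHNYJ4AaABAg",
   "responsibility": "developer", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyEhL4ch47VLdP9gNJ4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "resignation"}
]
'''

# The four coding dimensions shown in the result table.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Map comment ID -> coding record, checking each record carries
    all expected dimensions (a hypothetical validation step)."""
    by_id = {}
    for rec in json.loads(raw):
        missing = DIMENSIONS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgzoUkud1w7TAbQHNYJ4AaABAg"]["policy"])  # regulate
```

The validation step matters in practice: an LLM asked to emit structured codings can drop a dimension or an ID, and failing loudly on a malformed record is safer than silently indexing it.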