Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "1:15:14 the thing is that innovation usually come from outliers. Regardless if A…" (`ytc_UgzLf2ugK…`)
- "I'm 69 years old and I treat chatGPT always with respect and politely. Its that …" (`ytc_Ugz3BhPI3…`)
- "I mean using Ai to fix grammer is okay, I myself is a very young guy who like to…" (`ytc_UgwM61y-j…`)
- "*We should not use AI as our physical mental or social need* . / If we, results ma…" (`ytc_Ugze8HSTP…`)
- "They make robot to increase their business productivity, it's not for people ben…" (`ytc_Ugyh0xDKw…`)
- "BEST case is AI continues to approach technical perfection without a moral/physi…" (`ytc_Ugw9ePl7b…`)
- "This Bern owns three houses / Gets paid to speak / Tells us we need to keep his h…" (`ytc_Ugyjkx9Xp…`)
- "AI logical conclusion could be that humanity is a threat to Earth. Is there a g…" (`ytc_Ugy348a0-…`)
Comment (youtube · AI Governance · 2023-05-17T12:2… · ♥ 8)

> Well lots of political correct answers and responses here, felt quite 'scripted'. That being said, a needed discussion and the idea for an international body to govern AI development and deployment is not a bad idea. Though US as lead? I don't know about that, feels like this hearing was rushed in because Europe released the AI pact recently and was working on it for a very long time already. Let the US figure out first how they want to regulate data & privacy ruling and make that a national ruling, not by state. Also regulation on current AI is a bit too late already, don't think IBM, OpenAI etc. are going to pause AI development. The time is now to start creating regulation around AGI, this will truly be the disrupter to life as we know it. Let's wait and see how fast the USA can move on this topic.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzzLlUBe0v-1wKxrIJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyBhsm5scgAtULZegZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzoJtHLDlD6XDKOlvd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxve2CUI8lJRSfuoBB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgySBB0AlKEj_7V0d2N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwyj4c_sPcXOZ7p0jJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"mixed"},
  {"id":"ytc_UgzNSALVSWzHxPSrcLN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyjIjorUZrqN1yuu454AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyKpdblaroeKfhkxxx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwY5AK3SjS_-RibElh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
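The ID lookup described above can be sketched in a few lines: the raw LLM response is a JSON array with one object per coded comment, so indexing it by `id` gives direct access to any comment's codes. This is a minimal sketch, not the dashboard's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown, and the two-row payload here is a trimmed excerpt of it.

```python
import json

# Trimmed excerpt of the raw LLM response above: a JSON array,
# one object per coded comment.
raw_response = """
[
  {"id": "ytc_UgyjIjorUZrqN1yuu454AaABAg",
   "responsibility": "distributed", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgySBB0AlKEj_7V0d2N4AaABAg",
   "responsibility": "government", "reasoning": "deontological",
   "policy": "ban", "emotion": "fear"}
]
"""

# Index the batch by comment ID so any coded comment can be looked up.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up the comment whose coding result is shown in the table above.
code = codes_by_id["ytc_UgyjIjorUZrqN1yuu454AaABAg"]
print(code["policy"], code["emotion"])  # → regulate mixed
```

Building the dictionary once makes repeated lookups O(1), which matters when inspecting many IDs against a large coded batch.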