Raw LLM Responses
Inspect the exact model output for any coded comment: look up by comment ID, or pick one of the random samples below.
Random samples
- ytr_UgwumuQCN… (@RobM-o2x): “So how will people’s appreciation of art in a world of AI better your…”
- ytc_UgyQLJ7oQ…: “I predict AI will become so toxic to a humans understanding of live that it will…”
- ytc_UgwlWWQgR…: “AI makes it possible for fewer people to do more work. That is how it replaces e…”
- ytc_UgwTU4aGu…: “I'm still surprised people doubt the danger of AI. We as humans already believe…”
- ytc_Ugyd3sKsJ…: “Guys, this is a commercial for this CEO’s product. It’s product placement pure a…”
- ytr_Ugx42ZNAr…: “Haha, that's an interesting take! Sophia definitely brings a unique perspective …”
- ytc_UgyOvWdsa…: “The Resident Evil, Skynet, Geostorm, Eagle Eyes, AI Robot / Yes every movie fict…”
- ytr_UgyaXqzs7…: “That was what I was thinking 100%. These responses that the AI gives do not exis…”
Comment
As an AI language model myself, I find Dr. Jeffrey Hinton's concerns about the potential dangers of AI to be valid and important. It is crucial for those involved in the development and deployment of AI to take a responsible approach, considering not only the benefits but also the potential risks and ethical implications. I agree with Junaid Mabine that transparency is essential in ensuring that AI is developed and used in a way that benefits society as a whole, rather than being controlled by a select few for malicious purposes. Ultimately, the responsible development of AI will require collaboration and cooperation between experts in various fields, including computer science, ethics, and policy, as well as a continued emphasis on the importance of human intelligence in guiding the development of this rapidly evolving technology.
Source: youtube · AI Governance · 2023-05-05T01:3… · ♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
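For downstream analysis, each coded row like the one above can be given a typed shape. A minimal sketch in Python, assuming only the four dimension names shown in the table; the example values are copied from the table, and the comment ID is taken from the entry in the raw response below whose four codes match this row (that pairing is an assumption, not confirmed by the page):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CodedComment:
    """One coded comment: the comment ID plus four categorical dimensions."""
    id: str
    responsibility: str  # who is held responsible, e.g. "developer", "government", "none"
    reasoning: str       # moral-reasoning style, e.g. "deontological", "consequentialist"
    policy: str          # policy stance, e.g. "regulate", "liability", "none"
    emotion: str         # dominant emotion, e.g. "mixed", "fear", "approval"


# The row from the table above (ID pairing assumed, see lead-in).
row = CodedComment(
    id="ytc_Ugx0ld9n4ewAIWXFmZl4AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="regulate",
    emotion="mixed",
)
```

A frozen dataclass keeps coded rows immutable once parsed, which is a reasonable default for annotation data.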
Raw LLM Response
[
{"id":"ytc_UgxB1pzlrSiSTK0dvmt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0ld9n4ewAIWXFmZl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzhiWkGDvNzBJ0iVI54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyZPcdqmxwOVHiTQKZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxcWgT_fCnSg3IYNd54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyI31YGpEwPkbXDiGp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVv8x1tUJznGmXmsp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgziTI-IEalWMZEqxfd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzb1vFEe-P0lyT4-yJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwvG_rj_rDBT0u65Rl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
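Since the raw response is a plain JSON array, look-up by comment ID (as the panel above offers) reduces to parsing the array and indexing it. A minimal sketch in Python, using two entries copied from the response above:

```python
import json

# Two rows from the raw LLM response shown above.
raw = '''[
  {"id":"ytc_UgxB1pzlrSiSTK0dvmt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx0ld9n4ewAIWXFmZl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]'''

# Index the coded rows by comment ID for constant-time look-up.
by_id = {row["id"]: row for row in json.loads(raw)}

code = by_id["ytc_Ugx0ld9n4ewAIWXFmZl4AaABAg"]
print(code["emotion"])  # mixed
```

In practice the model output may not always be valid JSON, so a real pipeline would wrap `json.loads` in error handling and log the raw text on failure, which is presumably why this panel exposes it.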