Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID.
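As a minimal sketch of what this lookup amounts to, assuming the coded records are stored as a JSON array shaped like the Raw LLM Response shown further down (the file name here is a placeholder, not the tool's actual storage path):

```python
import json


def find_coded_record(comment_id: str, path: str = "raw_llm_responses.json") -> dict | None:
    """Return the coded record for a comment ID, or None if it was not coded.

    Assumes `path` holds a JSON array of records with the keys seen in the
    raw responses below: id, responsibility, reasoning, policy, emotion.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)


# Example: look up one of the IDs that appears in the raw response below.
record = find_coded_record("ytc_UgybbnrFSa9iq9QcUAR4AaABAg")
if record is not None:
    print(record["responsibility"], record["emotion"])
```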
Random samples available for inspection:
| Comment excerpt | Comment ID |
|---|---|
| AI art shouldn’t make me as sad as it does. I don’t even make art, but I love to… | ytc_UgxUzpnzp… |
| Thanks for explaining this. I had heard people say AI is bad for the environment… | ytc_UgzgAuC1k… |
| Yeah, i don't even care about the tool argument. Its about copywrite, what if yo… | ytc_UgzOem7fe… |
| There is NO "artificial" intelligence without emotions. Hard to code for LOVE or… | ytc_UgyvXGnFR… |
| It’s funny to think corporations are going to lower cost, and make things cheape… | rdc_jw65rik |
| They look similar in the way that a chinese knock off MMO looks similar to it's … | ytc_Ugyphp8Sd… |
| 5:50 to find what's legal or not about the product, open source in this case, yo… | ytc_UgwCdpNhQ… |
| I've always imagined ChatGPT as a sophisticated male like Jarvis or something, a… | ytc_UgzegKMkU… |
Comment
I'm fascinated by the vast array of possibilities AI presents. It promises to revolutionize fields like healthcare, education, and environmental sustainability by streamlining processes and making them more efficient. But I'm also deeply concerned about its potential misuse—how it could be used to deceive, control, or harm individuals and societies. We must address these ethical dilemmas as AI evolves, ensuring that its development and deployment prioritize human well-being and safety. As we push technological boundaries, safeguarding against misuse becomes crucial, requiring collaboration across disciplines to establish robust ethical frameworks and regulatory measures.
Source: youtube
Timestamp: 2024-09-19T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
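One way to carry a result like this around in analysis code is a small record type whose fields mirror the keys of the raw LLM response; this is only a sketch of the shape, not the tool's actual schema:

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One coded comment, with the dimensions shown in the table above."""
    id: str              # comment ID, e.g. "ytc_UgybbnrFSa9iq9QcUAR4AaABAg"
    responsibility: str  # e.g. "distributed"
    reasoning: str       # e.g. "mixed"
    policy: str          # e.g. "regulate"
    emotion: str         # e.g. "fear"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```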
Raw LLM Response
```json
[
{"id":"ytc_UgyI-E1o_cclZj5XCIx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxBd15vnqWNjEXJgoB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxPS7xyWJxOJuCbREZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx2zicbCLdQkEuy6PZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwRSOWjF1gjtRLgby14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwe2dNic0-oHMUFPed4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx9U77W21TJJFLPV4F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybbnrFSa9iq9QcUAR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxDlSRvUFqfbV7-JNp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzVfkjm5GktqKyX2Bt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
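A hedged sketch of how such a response might be parsed and sanity-checked before use, assuming the only valid values are the ones observed in responses like the one above (the actual codebook may allow others):

```python
import json

# Values observed in raw responses; the real coding scheme may define more.
ALLOWED_VALUES = {
    "responsibility": {"developer", "user", "company", "government",
                       "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed"},
}


def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unexpected values."""
    records = json.loads(raw)
    for record in records:
        for dimension, allowed in ALLOWED_VALUES.items():
            value = record.get(dimension)
            if value not in allowed:
                raise ValueError(
                    f"{record.get('id')}: unexpected {dimension}={value!r}")
    return records
```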