Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Moving people out of poverty creating a goodwill and also a future market for am…" (`rdc_dcwbhz5`)
- "Huge problem with what he said. Making AI aligned with American goals. If we con…" (`ytc_UgxyftFdJ…`)
- "I think it would be better for these self driving cars to only operate in safe z…" (`ytc_UgzjEwmPC…`)
- "@cormorantblack Being paid to convince governments to regulate and slow down AI …" (`ytr_UgwMkgH1D…`)
- "Self limiting factor of \"all jobs gone\": people just stop buying all non essenti…" (`ytc_Ugy7oMxXE…`)
- "I almost always say thank you at the end. I don't have the skills to survive an …" (`ytc_UgwerhiWJ…`)
- "In 10-20 years, we all might be f*cked, but for right now, AI is in no shape to …" (`ytc_Ugxv_UEUC…`)
- "Asimov and Clarke knew the impossibility of setting safe goals for AI, but Wolfr…" (`ytc_Ugxx6qeyY…`)
Comment
> As a Software Engineer, I think what we need to do is eliminate bad actors by regulating it's usage, like making sure AI does not use copyrighted material, ensuring it will be as ethical as possible instead of trying to stop it altogether. As pessimistic as it could be, AI will progress even more and I think we should make sure that AI will not encroach spaces meant to give jobs to people (like Artistry, etc.)
Source: youtube · 2024-05-25T12:2… · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyLnCC5BiLQ5mvzEqd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLedSRsD5kxzu8jvp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzT9KyYX10Ff3GxHtp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxDqw-EPlvcgMJWgMd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_fQP6YN3wcBHymjF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugwvi3YshAW_KYvViYh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxRmK-j2hIw0viMF514AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxubN_CDwV-SXtosXB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzLxpGG_gg4XsZgeCt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwpv1V_KN2ZeALQuCB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
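A raw response like the one above can be turned into the per-comment coding table with a small parse-and-validate step. The sketch below is a minimal, hypothetical implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, but the allowed value sets are only those observed in this one batch, not the project's full codebook, and the function name `parse_coding_response` is made up for illustration.

```python
import json

# Dimension values observed in the sample response above.
# Assumption: the real codebook may contain values not seen here.
ALLOWED = {
    "responsibility": {"user", "government", "developer", "company",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "approval",
                "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of rows) into
    {comment_id: {dimension: value}}, rejecting rows with missing
    fields or values outside the observed codebook."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]  # KeyError here flags a malformed row
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = codes
    return coded

# Usage with a hypothetical comment ID:
raw = ('[{"id":"ytc_example1","responsibility":"distributed",'
       '"reasoning":"contractualist","policy":"regulate",'
       '"emotion":"approval"}]')
print(parse_coding_response(raw)["ytc_example1"]["policy"])  # → regulate
```

Validating against a fixed value set at parse time is what makes a batch-coding pipeline like this auditable: a model that drifts off-codebook fails loudly at ingestion rather than silently polluting the coded table.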