Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_UgzdbjHc_…: "Yup, though I think even "prompt engineer" is over the top pretentious, it impli…"
- ytc_Ugypp3Zpv…: "Art only requires a few things, if your able bodied, 20 bucks and some effort, a…"
- ytr_UgwN7nfTs…: "Drake Santiago what r u on who cares if groups unite, if ai is more efficient th…"
- ytr_Ugi_4HU5J…: "automated battle drones LMAO there are no automated battle drones. Most "drones"…"
- ytc_UgzwbwJ8p…: "I genuinely want to have a discussion on this, because I'm trying to understand …"
- ytc_Ugy6CNEIC…: "My classmate does that too. I asked if she's a robot and she said no. Huh…"
- rdc_oi14z6p: https://preview.redd.it/z4kfevq0r5xg1.jpeg?width=1402&format=pjpg&auto=w…
- ytc_UgwO8t889…: "I don't think humans are wise n ore moral enough to properly create safe ai.…"
Comment
Open AI's Sam Altman talks on ChatGPT, AI Agents and Superintelligence provided us several perspectives in today's real world. It also raised several profound issues on ethical, regulatory and legal dimensions within the innovation business and open education world. Prof Madya Dr Jeong Chun Phuoc
youtube
2025-07-28T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxmvKxIBiv3l5KARsJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzTeyyok9c9hhA5VzR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyXczWVzVDYFg134Rt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzsh9BC7jiAo32sAn94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyAKUq6PLrHCsGd52B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwS7gGAr-EnQWAxeZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx5UBnZRHiO2ALPfRl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyDjXm1ayue0pFAmQ14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzTAW43RVop5KAks_B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwu1p81KpwoaOlRuTJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"regulate","emotion":"indifference"}
]
```
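Looking up a single coding record by comment ID, as the tool above does, can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: the `lookup` helper is hypothetical, and the record fields simply mirror the JSON array shown above.

```python
import json

# A raw LLM response is a JSON array of per-comment coding records,
# one object per comment ID (structure taken from the example above).
raw = """
[
  {"id": "ytc_UgxmvKxIBiv3l5KARsJ4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzTeyyok9c9hhA5VzR4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
"""

def lookup(records, comment_id):
    """Return the coding record matching comment_id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
hit = lookup(records, "ytc_UgzTeyyok9c9hhA5VzR4AaABAg")
print(hit["policy"])  # regulate
```

A dict keyed by `id` would make repeated lookups O(1), but for a handful of records per response a linear scan keeps the sketch simple.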