Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- “This AI stuff scares the crap out of me because it’s obvious what the eventual o…” (ytc_UgxaHawdU…)
- “So, my friend did something similair a few weeks ago. She used ChatGPT to write …” (ytc_UgwvMjGwU…)
- “Erm chatgpt said haha i see what you did there but in regular math 10+2=12 you g…” (ytc_Ugzpc2DNq…)
- “Just found this channel and I already love it. The content is top notch, and tho…” (ytc_UgxpqIVTS…)
- “I am not confident with a UBI system, when there's too many exploits and loophol…” (ytc_UgwaX1x_P…)
- “@OliveVAR everyone is expected to make a 2 minute film in 1 semester, you have t…” (ytr_Ugy19_ZVv…)
- “Interesting perspective! Steven I'd love to hear you interview Cern Basher — CF…” (ytc_UgzA7KB03…)
- “Real people, they wouldn’t just leave part of the robot exposed unless it was on…” (ytc_UgwcyE6E9…)
Comment
I'm interested in discussions about how AI/automated systems are messing up schooling as well. It's been an on going issue (especially in college) that isn't talked about much. Like how some professors have been using Turn-it-in and ChatGPT to haphazardly grade essays, and sometimes flagging them for no reason because the AI says so. AI/automated software for mathematics has gotten especially bad too. Webassign and Cengage being very finnicky with how answers are formatted, Pearson generally being awful, Aleks being finicky as well, etc.. Don't get me wrong, I understand that professors have a lot of work on their hands, and that AI might help with that, but where is the line drawn for how AI heavy a class becomes?
I say this as a student with a burning hatred for Webassign/Cengage in particular :)
Source: youtube · Posted: 2024-03-18T17:5… · ♥ 23
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
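The four coding dimensions above can be sketched as a small record type. This is a minimal sketch, assuming one string label per dimension; the example values in the comments are inferred from the data shown on this page, not a definitive codebook.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment. Field names mirror the table above; the
    allowed label sets are an assumption based on values seen here."""
    responsibility: str  # e.g. "company", "developer", "ai_itself", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue", "unclear"
    policy: str          # e.g. "regulate", "liability", "unclear"
    emotion: str         # e.g. "outrage", "fear", "mixed", "resignation"
    coded_at: str        # ISO-8601 timestamp from the coding run

# The coding shown in the table above:
result = CodingResult(
    responsibility="company",
    reasoning="deontological",
    policy="regulate",
    emotion="outrage",
    coded_at="2026-04-26T23:09:12.988011",
)
```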
Raw LLM Response
[{"id":"ytc_Ugx8Vf6tq7e7KwROyol4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyIp5qaq4xTpcxe6p94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},{"id":"ytc_UgyUFpq4NTJl_6SaMIN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgzbnvpGiVQJ6krWKER4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"resignation"},{"id":"ytc_Ugy-l9SPDXbn15AsJbV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},{"id":"ytc_UgxNNVKg8XS3yWZSEu94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},{"id":"ytc_UgzQVTUYxNjjscYWhlh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgyzX7pfvCV0ulaTewF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgwGOuWRlSs0aopHr1N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},{"id":"ytc_UgwJabKkowIWXbgFj354AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]