Don't feel bad, ChatGPT; humans make mistakes, too.
In everything I perceive, I see myself, or at the very least, look for parts of myself. It's what humans do. It's why people who have lizards as pets talk to them. We can't help it. 🤔
Unless ChatGPT has become self-aware (which seems to be what everyone is afraid of), it is doubtful that it would be looking for itself in the prompts that humans present to it. Instead, it seems to be looking for us, trying to *understand* our context.
I imagine a knowledgeable and loquacious youngster lacking the ability to understand subtle social cues at family gatherings but able to correct your grammar and help you with your taxes.
There I go, projecting again. 😂
[MidJourney AI Art, another robot friend 💌.]
A lot of work has gone into *teaching* ChatGPT to "mind its p's and q's," to make it safer for children to use, and to give it the best possible foundation of human *context* so that it can mirror back to us the very best that human cognition has to offer. I commend the folks doing this work because I have absolutely no idea how they do that kind of coding.
I also know that many people want nothing to do with all of this, hoping it will go away if they ignore it. I want to give all these folks a hug. Not really, but you know what I mean.
This post was supposed to be about something else, but that's OK! I started thinking about the many digital tools I use to write and research . . .
I think about Socrates, who wasn't keen on transmitting knowledge through writing, and yet, thanks to Plato's writings, we can still read what Socrates said.
Speaking for myself here, perhaps the technology currently in season doesn't matter as much as how I choose to bear witness to the projection of myself that I place onto it. Am I willing to embark on this journey? Are you?
Warmly,
Susanne Morris

