La Salle St After Hours #2 (take 3)

Yesterday’s post juxtaposed two essays, one written by a person and one by ChatGPT, an AI program. I asked which was which.

When I tried to write this follow-up, YouTube decided I was a robot and wouldn’t let me add links to the post. The post then disappeared. The next take froze and wouldn’t accept my CAPTCHA responses; now it was sure I was a robot. Then ChatGPT erased the additional essays I was going to discuss. I finally used two different browsers, one for ChatGPT and YouTube links and another to write the post. So here is #2, take 3.

ChatGPT wrote a convincing essay about a non-existent oil painting. One might say it was brimming with “alternative facts”. In reality, “La Salle St After Hours” is a jazz piece: John Wright (1934-2017) was a jazz pianist from Chicago. His debut album, “South Side Soul”, was released in 1960 with Wendell Roberts on bass and Walter McCants on drums.

When I changed the prompt to call it a “composition” instead of a “piece”, ChatGPT took minutes to begin writing, then spit out the same essay with a different conclusion, using new adjectives in the final paragraph to describe the fictional painting.

I tried again, calling it a “jazz composition”. ChatGPT continued to insist it was a 1991 work and now highlighted the virtuosity of the saxophonist, discussing his “quick runs” and arpeggios. Since the work is for piano, bass, and drums, I wondered where the AI found sax, so I asked. ChatGPT told me there were many recordings of this piece with different saxophonists, so I needed to be more specific. When I gave it specifics, it spit this back:

Thank you for providing additional information. The 1960 recording of “South Side Soul” features John Wright on piano, Frank Derrick on drums, and Frank Walton on trumpet. The saxophonist on this recording is Harold Ousley. Ousley was a talented tenor saxophonist who played with a number of notable jazz musicians over the course of his career, including Lionel Hampton, Count Basie, and Dizzy Gillespie. His soulful, bluesy style of playing is a perfect fit for the “La Salle St After Hours” composition, and his solo on the recording is a highlight of the performance.

As you may note, the only fact in this that is accurate and relevant is that John Wright plays the piano; but I told it that. The others are real musicians, but they did not play on this record. These really are “alternative facts”, in that they are factual but have nothing to do with the question asked. It would be like a student who has some knowledge of the general topic but none about the question asked, and so fills an essay with stuff that is true but irrelevant. Ousley’s “soulful, bluesy style of playing” is certainly not “a highlight of the performance”.

While I’ve read multiple essays warning that students will use ChatGPT to do their homework, I would give the bot an ‘F’ on every one of its attempts. Had I known nothing of the work, I might have been convinced. Knowing the work, I’d know the student cheated. Not knowing the work, about a minute of Googling would have shown that the essays about a painting were lies; another couple of minutes would have shown that the ones calling it jazz were no better.

The other essay, about the pianist playing late into the night after the club had closed, was written by a student responding to an exam prompt. The students were given two prompts, and the piece was played twice in succession. They had to place the piece in a musical, cultural, and historical context in one essay and write a personal reaction in the second. The essays had to be written while the piece was playing and handed in a couple of minutes after the music stopped, so the bot and the human were working in about the same timeframe, kind of like John Henry and the steam drill.

It may not have been fair that I used the student’s second essay and not the first, which may have borne some minor stylistic resemblance to the ChatGPT essay; but I think this illustrates perhaps the most important issue.

What is “intelligence”? To me, it is more about the ability to learn than about accumulated knowledge. We don’t call an encyclopedia “intelligent” even though it contains a lot of information. At the same time, we don’t belittle a child’s intelligence because they have not yet accumulated a vast store of knowledge; we note a child’s intelligence via the capacity to learn. Learning arises from not knowing. If we don’t know and we are aware of that, we can learn. This AI bot is stupid. Why? Because it makes shit up. If you hide your lack of knowledge by making shit up, you don’t learn. As a programmer, I learned GIGO – Garbage In, Garbage Out. What came out of ChatGPT was Garbage.

Thanks for playing. As your prize, here is the title cut from “South Side Soul”.

Author: halffastcyclingclub

We are a group of friends who ride bikes. Some of us are fast, some of us are slow, all of us are half-fast. In 2018, one of us rode coast to coast across the US. It was so much fun, he's doing it again in 2022! If we meet Sal Paradise, we'll let you know.

8 thoughts on “La Salle St After Hours #2 (take 3)”

  1. I thought that the writing style of what I now know to have been AI lacked what I’d describe, literally and metaphorically, as the human quality. The second essay yesterday read like a translation from another language.
    Written English, like the spoken version, has an accent. I rest assured that AI is a long way from being able to replicate such subtleties. Though it did just correct a couple of spelling errors whilst I wrote this!


    1. Sorry for the late response. The WordPress AI thought your comment was spam and I just retrieved it. Among my favorite reading are instructions that were obviously translated and make no sense, especially when they’re for assembling something complicated.


  2. ChatGPT writes very much the way freshman college students are taught to write. I just chalk it up to the likelihood that the people writing the program teach it how to succeed at the basic five-paragraph essay, which is most often what it spits out. I also think (this is conjecture) the people programming it might not like to write, might have just gotten through English classes, and think that’s a good essay. That’s totally my opinion since I don’t know them.

    I’m also not sure anyone could teach a bot to write really well since it doesn’t actually “read.” It just collects information. That’s something it told me a while back when I was trying to “understand” it as a threat or advantage to education. Any student who uses it should revise the essay the chatbot comes up with, but it’s entirely likely the student would get a passing grade on just what it is.

    Some university admissions officers have written that the bot would probably get accepted to college based on the essay, but not to a really good school. That’s how I see it, too. It seems also to have been trained to believe people. It also has a certain authoritative tone that I don’t like in people, but that’s just programming. It’s definitely NOT “Deep Thought.”


  3. I like to ask the bot about itself, so I just did. I thought this might interest you. One thing I like about it is that it knows its limitations and is candid about them. For one thing, it knows it might not get its “tone” right and has no emotions with which to inform its writing (obviously), but it uses the word “struggle” to explain that. I kind of enjoy its irony. I’ve learned something about language from it. Of course it does not “struggle” with anything, but how else would it communicate to a human what it has to say without that word?

    Martha Ann Kennedy
    What are your limitations as a writer?
    As an AI language model, I have a few limitations as a writer. While I can generate a wide variety of texts and answer many types of questions, there are certain things that I am not yet capable of doing. Here are a few limitations:

    I lack creativity: While I can generate unique texts and come up with novel answers to questions, I do not have true creative abilities like a human writer. I can’t create original stories, poems or songs in the way a human writer can.
    I don’t have personal experience: I don’t have personal experiences or emotions like a human writer, which can limit my ability to write from a specific point of view or describe a personal experience with a certain depth.
    I may lack context: As an AI, I may lack context in certain situations or fields. For example, if you ask me to write about a specialized topic that I have not been trained on, my response may not be as informed or nuanced as a human expert in that field.
    I may struggle with tone: While I can recognize the tone of a piece of writing, sometimes I may not be able to hit the perfect tone for a specific audience or situation.
    I can sometimes generate nonsensical or inaccurate responses: Although I strive to produce high-quality content, there may be instances where my responses are nonsensical or contain inaccuracies. This can happen due to errors in my training data or lack of contextual understanding.

    I asked it about its programmers:

    What is the background of the people who “taught” you to write? Are they writers?

    As an AI language model, I was developed by a team of researchers and engineers at OpenAI. These individuals have diverse backgrounds, including computer science, artificial intelligence, mathematics, linguistics, and psychology.

    The team that created me includes some of the top experts in the field of natural language processing and machine learning. They have spent years researching and developing algorithms and models that can help computers understand and generate human language.

    While some members of the team may have experience in writing or literature, the primary focus of their work is on developing sophisticated machine learning models that can learn to generate human-like language. Therefore, my “teachers” are not necessarily writers, but rather experts in the field of artificial intelligence and natural language processing.


  4. I have been reading that it also has no memory, so when it writes something, it does not learn from it or learn a style. It could combine two ideas at our request, but it won’t remember what it has done. Weird. So it’s an AI with no long-term memory beyond what it starts with. Instantly out of date.

