Multiple forms of artificial intelligence (AI) are widely used in the marketing industry. These guidelines address the use of predictive language models, including ChatGPT and other generative natural language processing models.

Predictive language models are used to create new text similar to the text used to train them. A familiar example is the predictive text that suggests the next word as you compose a message on an iPhone. ChatGPT stands out because it employs deep learning, an advanced form of machine learning built on algorithms and models inspired by the human brain’s neural networks. In essence, deep learning allows ChatGPT to write text that sounds more like a real person.

Generative AI creates new content based on the model’s prior learning (training) and in response to a human prompt. It can:

  • Transform: Here is a revision of your text written at a lower grade level.
  • Generate: Here is an outline of a presentation someone might give in response to your prompt.
  • Predict: Here is how a teenager would finish that sentence.

ChatGPT and other forms of generative AI are tools that can help marketers and communicators in their work, but they should never be considered a replacement for human workers. Even the most advanced forms of AI cannot fully replicate the power of human thought. Common shortcomings include:

  • Inability to anticipate needs. (Prompt writers must provide accurate and complete directions.)
  • A tendency to fabricate facts to satisfy a prompt.
  • Failure to understand context.
  • Word choices and tone that are inappropriate for a particular audience or brand.

Use of AI tools should not be considered plagiarism. Plagiarism is defined as presenting another human being’s work or ideas as one’s own. An AI tool is not a human being.

These guidelines are meant to guide informed and ethical use of ChatGPT and other generative natural language processing models in marketing and communications at the University of Wisconsin-Milwaukee. These guidelines are for employees only. They should not be misconstrued as academic policy or guidelines for instructors and students.

Guiding Principles

The University of Wisconsin-Milwaukee and its Division of Marketing, Communications & University Relations (MarComm) value transparency in their communications. AI-generated text should never be presented as the work of a human being. More specifically:

  • AI sources should not be used to produce bylined content for UWM Report or other university webpages. Bylined text should be fully the work of the person whose name is on it.
  • When AI is used to generate text, the person using the AI tool should inform their supervisor and others involved in the project. It should be clear how the tool was used (e.g. to generate ideas, draft text, revise, translate into another language, etc.).
    • Please note that AI content detectors are not completely reliable and may flag human-produced content as AI-generated, and vice versa.
  • Appropriate steps should be taken to ensure accuracy in all AI-generated content, including independently verifying all facts.

Acceptable Use

Acceptable uses of AI include brainstorming; translation; and content generation, organization and revision. The sections below provide details on and examples of these uses.

Brainstorming. One of the strengths of AI is that it can produce variations and iterations quickly. This feature can be used to facilitate brainstorming. For example, a user may ask the AI tool to produce 20 options for social media headlines about the opening of a new campus building.

AI tools also can be helpful in the planning stages of marketing. For example, a user could ask the tool to generate an outline for a social media campaign celebrating graduation. The user could then ask for several other versions of the outline, and then pull the best suggestions from each outline to create a final plan.

Be aware that AI tools cannot generate truly new ideas; they can only repurpose and reframe what has come before. This inherently limits the level of creativity in the tools’ content generation.

Content generation. ChatGPT and other AI tools can be used to create new content for webpages, emails and a wide range of other uses. In general, the more specific and complete the prompt, the better the quality of the generated content.

A wide range of free training materials is available for those using AI tools. People using ChatGPT or similar tools to generate content are encouraged to complete training on how to write prompts.
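For illustration only, a well-structured content-generation prompt might look something like the sketch below. The scenario and bracketed placeholders are hypothetical, not real UWM information; the point is that the prompt names the audience, reading level, channel, format, tone and source facts.

```
Write a 150-word web announcement for prospective transfer students.
Audience: general public; reading level: 10th grade.
Channel: UWM webpage. Format: short headline plus two paragraphs.
Tone: welcoming and direct.
Use only these verified facts: [paste facts gathered from primary sources].
```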

Content organization. ChatGPT and other AI tools can organize notes into meeting minutes and memos, saving time on administrative tasks.

Content revision. ChatGPT excels at revising content for a specific reading level, to be a certain length or to fit an alternate format. For example, a user could provide text from a news release and ask ChatGPT to write a Facebook post. A lengthy and dense administrative memo could become a 200-word web post written for a general audience at a 10th-grade reading level.

Translation. Some AI tools, including ChatGPT, work in multiple languages. They can be used to translate English text into Spanish and other languages. When this is done, the new text should always be reviewed by a native speaker.

It is important to note that AI tools’ proficiency varies across languages. For example, as of September 2023, ChatGPT was capable of generating text in at least 30 languages. However, it provided a cautionary note with its list of languages, saying “proficiency may vary depending on the complexity of the conversation and the specific language.”

Prohibited Use

AI should never be used as a source of primary information. Primary sources of information are original documents and first-hand accounts from people with direct knowledge of the event, issue or subject; in other words, the sources closest to the thing being addressed. For example, the vice chancellor for Finance and Administrative Affairs could be a primary source about the UWM budget because they were involved in its creation and approval. The university’s architect would be a primary source for information about a building project.

AI tools cannot interview people or see objects. Because ChatGPT and similar tools are predictive, they fabricate facts to fill in gaps. For example, ChatGPT will make up quotes to fill gaps in press releases and fabricate square footage to produce a description of a building. Any fact in AI-generated text must be independently confirmed with a primary source or a reliable secondary source, such as the UWM Facts Database.

Data Security

UWM is legally and ethically obligated to protect individual and institutional data. Entering data into ChatGPT and other generative AI tools is like posting that information on a public website. The tools store the data so it can be used as part of their learning process. Once stored, the data can be accessed by AI developers and potentially other users.

UWM employees may enter only publicly available information that is classified as low risk into an AI tool. They should never enter internal, sensitive or restricted data into a generative AI tool. Employees who have questions about the classification of the data they are working with should talk to their supervisors.

Please see UW System Administrative Policy 1031, Information Security: Data Classification and Protection and UW System Administrative Procedure 1031.A, Information Security: Data Classification.

Bias and Discrimination

Because predictive language models generate text similar to the text they were trained on, they risk perpetuating bias and discrimination in the way they construct language and in their word choices.

All content generated by an AI tool should be reviewed by at least two people — the prompt writer and their supervisor — to help control for bias.

It is always recommended that stakeholder review of marketing materials include a group that is diverse in race/ethnicity, class, gender, sexual orientation, country of origin, religion and other characteristics. UWM’s Division of Diversity, Equity, and Inclusion is available as a resource to help facilitate the review of materials.

Production Process

The production process below is recommended for employees using ChatGPT and other AI tools to generate marketing content.

  1. Provide others working on the project and your supervisor with a brief written description of how AI will be used. Update this description as needed during the project lifecycle.
  2. Using reliable primary and secondary sources, gather the information needed to create robust, UWM-specific content. Examples include application deadlines, program requirements, enrollment or graduation numbers, rankings and accreditation information.
  3. Write a series of prompts to guide the tool in producing the content. Include the target audience, appropriate reading level, length guidelines, delivery channel (webpage, brochure, Facebook post, etc.), and the facts used as input. Have the content generated in English first, and then have it translated as needed.
  4. Revise the content as appropriate. For example, you might draw from two or three AI-generated versions of the content to compile a final version produced by you (a human).
    • In general, AI-generated content should not be used without revision. It tends to have some elements of non-human awkwardness in its phrasing that can be off-putting and diminish effectiveness.
    • Consider having the AI tool review your revised content for brevity and grammar.
  5. Review and confirm the accuracy of all the facts included in the content.
  6. Use Hemingway or another tool to check the reading level to confirm that it’s appropriate for your audience.
  7. Use the UWM Editorial Style Guide and AP Stylebook as appropriate to ensure consistency with UWM brand guidelines. These style guides cover things like the proper way to format times and dates, as well as job titles, school/college names and departments.
  8. Submit the content for review by your supervisor. Include additional stakeholders as appropriate for the project. If the content has been translated into a language other than English, have it reviewed by a native speaker.
  9. Use feedback from your supervisor, stakeholders and others to revise the content. At this point, it is likely most appropriate for the work to be done by a person, not by AI.
  10. Submit the revised draft to your supervisor and at least one other person for final review.
  11. Have the final version proofread.
    • MarComm provides professional proofreading services in English.
    • Proofreading in languages other than English may be available from external agencies.
  12. Retain a copy of the description of how AI was used during the project in case questions arise.
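Step 6 above recommends confirming the reading level with a tool such as Hemingway. As a rough, illustrative sketch only (not an official UWM tool, and less refined than dedicated readability checkers), the widely used Flesch-Kincaid grade-level formula can be approximated in a few lines of Python:

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count vowel groups, drop a silent trailing 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return round(0.39 * len(words) / len(sentences)
                 + 11.8 * syllables / len(words) - 15.59, 1)

sample = "UWM offers flexible evening classes. Students can enroll online."
print(flesch_kincaid_grade(sample))  # a score at or below 10 suits a general audience
```

Because the syllable counting is heuristic, scores may differ slightly from Hemingway or other professional tools; treat the result as a quick sanity check, not a final verdict.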

Resources

AI Content Detector:

AP Stylebook (accessible from UWM IP addresses):

eBook on writing ChatGPT prompts:

HubSpot article on writing AI prompts:

Guidance on using ChatGPT for translation:

UWM Editorial Style Guide:

UWM Guidance on Artificial Intelligence (AI) and Teaching: