Jeremy Lewitzke
Management Services Committee Member
L&S Electric, Inc.
Schofield, Wisconsin
The topic of Artificial Intelligence (AI) has appeared almost daily in business news channels, blogs and podcasts over the past 18+ months. The public debut of OpenAI’s ChatGPT in November 2022 brought a new level of public awareness to the capabilities of computer learning systems. Since then, we have seen a deluge of systems from large tech companies utilizing this technology. But as with everything else that moves fast in business, the rules, guidelines and policies for how best to use these tools tend to lag behind their adoption. AI can have unexpected legal and customer impacts if these are not considered early in its use within your business.
Use Cases in EASA Organizations
Many companies are using or considering these tools for things like:
- Translating RFPs or data from other languages
- Drafting marketing materials and customer communications
- Automating manual data entry processes
- Developing training materials for safety, on-boarding or work processes
- Data analysis
These tasks consume a notable portion of many employees' day-to-day work, so time saved here translates directly into meaningful budget savings.
Issues to Consider
- Large language models (such as ChatGPT) are built from data drawn from many sources, ranging from professional, peer-reviewed papers to amateur blogs and social media accounts. As a result, the models can produce output that is not factual or correct.
- Using these systems helps to train them for the future. In most free-to-use systems, this means the information you put into the model becomes part of the system and can surface for other users. This is particularly important given the proprietary nature of the work we often do for customers.
- Employees are often concerned about how technology will impact their future livelihood.
Developing AI Policies
Understanding these systems and establishing guidelines for their use is important to deploying these new technologies successfully in your organization. Key areas to consider when creating the policy include:
- Security Levels – What information can you safely put into the various systems?
- Responsibility – The output of these systems should assist with tasks, not complete them unsupervised. The user remains responsible for the content created and its validity.
- Transparency – Trust is important; being clear that you used AI or other technology to create content, and citing sources, is critical.
- Build Excitement – While there are a lot of cautions to consider, these are still incredibly exciting tools for our businesses. Create ways where people can experiment (safely) to find ways to improve performance in your business.
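The "Security Levels" point above can be made concrete with a simple pre-submission screen. The sketch below is a minimal illustration, not a policy tool: the restricted-term list and the function name are hypothetical, and a real policy would maintain such a list centrally (customer names, project codes, serial numbers, etc.).

```python
import re

# Hypothetical list of restricted terms; in practice this would be
# maintained centrally as part of the company's AI usage policy.
RESTRICTED_TERMS = ["Acme Corp", "PO-", "serial no"]

def screen_for_ai_use(text, restricted=RESTRICTED_TERMS):
    """Return the restricted terms found in `text`.

    An empty list means the text passed this simple screen and may be
    suitable for pasting into a public AI tool under the policy.
    """
    hits = []
    for term in restricted:
        # Case-insensitive literal match; re.escape avoids treating
        # characters like "-" or "." as regex syntax.
        if re.search(re.escape(term), text, flags=re.IGNORECASE):
            hits.append(term)
    return hits

# Example: a draft prompt containing customer identifiers is flagged.
draft = "Summarize the rewind scope for Acme Corp motor, serial no 1234."
print(screen_for_ai_use(draft))  # ['Acme Corp', 'serial no']
```

Even a basic check like this gives employees a clear, repeatable step before information leaves the company, and it makes the policy's security levels something people can act on rather than just read.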
Related Reference and Training Materials