Conference Agenda

Valley City State University
Thursday, Sept. 26, 2024

8-8:30 a.m.

Registration

Foyer of Center

8:30 - 8:45 a.m.

Welcome/Announcements

Performance Hall

8:45 - 9:45 a.m.

Hon. James Baker

Performance Hall

"Promise and Peril: A Public Citizen's Guide to AI"

China’s State Council has declared that China will be the world’s leader in Artificial Intelligence by 2030. Vladimir Putin has stated that “whoever becomes the leader in this sphere [AI] will become the ruler of the world.” A Belfer Center/IARPA study concluded that AI will be as transformative a military technology as aviation and nuclear weapons were before it. Stephen Hawking remarked that “powerful artificial intelligence will either be the best, or the worst thing, to happen to humanity.” How will AI change our lives and our society? Considering these realized and potential benefits and risks, in what manner should law, ethics, and policy regulate AI? How will AI impact nation-states and, more importantly, how will it impact national security? And, for our immediate purposes, what should public citizens, like us, know about this AI revolution? Judge Jamie Baker will address these questions and more during his keynote presentation. He is the Director of the Syracuse University Institute for Security Policy and Law as well as a Professor at the Syracuse College of Law and the Maxwell School of Citizenship and Public Affairs. He also serves as a judge on the Data Protection Review Court.

9:45 - 10 a.m.

Morning Break

Foyer of Center

10 - 11 a.m.

Dr. Eman El-Sheikh

Performance Hall

"Building an AI-Enabled Future Workforce"

Dr. Eman El-Sheikh’s talk, titled “Building an AI-Enabled Future Workforce,” will focus on how AI is transforming the workforce and how academic institutions can prepare skilled professionals who recognize AI’s benefits and pitfalls and are ready to use these technologies across all sectors and work roles. Attendees will gain insights into strategies for creating an AI-enabled workforce and the integration of AI to enhance productivity, drive innovation, and foster a culture of continuous learning. We will explore how AI can enhance specific fields, such as cybersecurity, and how higher education can adapt to prepare competent, AI-ready professionals. Participants will be equipped with actionable strategies to harness AI's potential and cultivate a dynamic, future-ready workforce. El-Sheikh is associate vice president and professor at the Center for Cybersecurity at the University of West Florida.

11 - 11:45 a.m.

Fireside Chat

Performance Hall

North Dakota University System Chancellor Mark Hagerott

Monsignor James Patrick Shea, University of Mary President

11:45 a.m. - 12:45 p.m.

Lunch

Cafeteria, Student Union

12:45 - 1:25 p.m.

Breakout Sessions

ROOM: CFA 240

The influx of sensationalized information about AI can be overwhelming, but the public and our stakeholders expect rapid deployment of AI solutions to solve a myriad of problems. This round table session brings together ND’s Automation and Data Science teams to collaborate and ideate practical applications of these fields within state government and similar entities. We will consider actionable use cases as well as the unique challenges that AI deployment in government can introduce. The format will be a dynamic question-and-answer session aimed at fostering interactive dialogue and sharing insights. 

Panel Discussants

Jason D. Anderson brings 20 years of industry experience, 17 of them with the State of North Dakota. He has the privilege of leading NDIT’s AI and Automation efforts and has a passion for data, technology, and building great teams. He is a father of five and has been married to his lovely wife, Cathryn, for 12 years. Jason also serves on several science and technology boards and takes an active role in his community.

Isaac Trottier started his career maintaining legacy Java web applications, but his passion for emerging technologies led him to the AI and Automation team, where he now utilizes robotic process automation to automate several business processes, saving the state time and money. He has a bachelor’s degree in computer science as well as seven years of experience with the state. Isaac has been very active in the exploration and development of AI technologies within the state and has been central to the state’s first attempt at a generative AI chatbot.

Sam Unruh blends a decade of expertise as a data scientist at the North Dakota Department of Information Technology with a passion for education and statistics. He earned his bachelor’s degree in Mathematics and Mathematics Education from the University of Mary, followed by a Master of Science in Applied Statistics from North Dakota State University. Sam is enthusiastic about translating complex data into understandable insights, making it accessible for various audiences. Alongside his role informing state-level initiatives in education and workforce development, Sam has also served as an adjunct statistics instructor at the University of Mary since 2016.

Michael McCrory currently serves the State of North Dakota as the Product Owner for the Robotic Process Automation Team. Specializing in the implementation of automation and artificial intelligence projects within government agencies, Michael leverages his passion for emerging technologies to enhance citizens’ daily lives. With a major in Computer Science and a minor in Sociology from NDSU, along with seven years of experience in state government, he effectively identifies and integrates these technologies into state processes. Michael also contributes to ongoing discussions about the ethical and practical implications of AI in government and is instrumental in leading NDIT’s first Generative AI Chatbot project.

Jill K. Baber currently serves the State of North Dakota as Product Owner for NDIT’s Data Science & Analytics Team, where she assists agencies in the art and science of data-driven decision-making, providing guidance on effectively leveraging AI and traditional analytics tools. Jill holds a Master’s Degree in Public Health and has worked for 14 years in the domain of data, as a data scientist, epidemiologist, and clinical researcher. Jill is a passionate advocate for data literacy—a foundational concept for an AI-driven future—and loves to explore the use of machine learning models for novel applications in state data.

Jessica Van Neste is a data scientist with eight years of experience working for North Dakota Information Technology. She holds a Master of Science in Experimental Psychology, specializing in cognition and story comprehension in children with ADHD. Jessica is passionate about working with Health and Human Services data, with a particular focus on Economic Assistance Programs, Child and Family Services, and Developmental Disabilities. Jessica is highly skilled in various data analysis tools, including Python and Power BI. Her expertise in these tools allows her to transform complex data into understandable insights for diverse audiences. Her work has been instrumental in driving significant improvements in service delivery for programs like CCAP, TANF, LIHEAP, and Foster Care.

ROOM: Choral

Faculty from the University of North Dakota’s Department of Languages and Global Studies will host an interactive panel discussion on teaching and learning global languages and cultures in the age of artificial intelligence.  These faculty members, who represent three different languages and cultures taught at UND, have been considering ways in which artificial intelligence can be utilized by students and faculty alike to improve learning outcomes and prepare for employment in the workforce. 

How can this powerful new technology be deployed to expand our worldview and pedagogical capabilities in a meaningful way?  Is it possible that this perceived threat to traditional educational structures could, paradoxically, act as a springboard to inspire creativity, empathy, and the kind of intercultural competencies that make us human?  Drs. Cole, Gjellstad and Knapp propose a panel discussion centered on these questions, in hopes of sharing both general and specific considerations of how North Dakota’s institutions of higher education can not only acknowledge the impact of artificial intelligence on teaching and learning but use it to prepare our students for a global future.  

The discussants will first engage the audience to gather their impressions of possible/perceived uses and abuses of AI in the classroom.  From there, each faculty member will share their concerns and corresponding pedagogical plans for dissuading students from the utilization of AI in some situations and encouraging it in others. This panel will be of interest not only to educators but also to anyone concerned with how artificial intelligence will impact the way we prepare the next generation to plan, build, create, and care for our North Dakota communities and citizens moving forward.  

Panel Discussants

Dr. Christie Cole, Assistant Professor of Spanish 

Dr. Melissa Gjellstad, Professor of Norwegian and Department Chair 

Dr. Thyra Knapp, Associate Professor of German Studies 

ROOM: CFA 222

The technology of artificial intelligence (AI) has rapidly evolved over the past decade from choosing and following algorithms (learning and reasoning AI processes) to creative AI processes that generate new material. This rapid growth of generative AI has created excitement and concern for education. The presentation will focus on the skills and competencies needed for individuals, including faculty and students, to utilize GenAI efficiently, effectively, and ethically. Presentation topics will include responsible and ethical use of AI, writing AI prompts, and teaching/learning strategies using AI.

This aligns with the conference theme of Skills for the AI Era, as it will explore the skills and competencies needed for individuals to successfully incorporate GenAI. The intended audience for this presentation includes researchers, industry professionals, leaders, policymakers, educators, and students. The presentation will combine a slide-based lecture with active audience participation through engaging questions, demonstrations, and examples. Objectives of the presentation include identifying key concepts of generative AI, exploring the adaptability and creativity of AI learning experiences, and examining AI teaching and learning strategies.

It is crucial that individuals are equipped with the foundational knowledge and skills needed to utilize generative AI efficiently, effectively, and ethically. 

Presenter

Carey Haugen is a board-certified Clinical Nurse Specialist and serves as the Dean, MSN Coordinator, and assistant professor at Mayville State University. She received a $1,000 Artificial Intelligence scholarship from the Dakota Digital Academy to complete the ACUE AI Quick Series. Carey has earned digital badges in effective college instruction and four digital badges specific to AI. Additionally, she has completed AI research and recently provided a presentation on Generative AI to the Dakota Nursing Program. With over 20 years of nursing experience and 10 years of teaching experience, Carey hopes to inspire others to utilize AI efficiently, effectively, and ethically.

ROOM: Performance Hall

This presentation will focus on the potential of custom GPT models to leverage the power of generative AI while ensuring human creators maintain oversight, creative control, and focus on specific outcomes. It introduces what custom GPTs are and the basics for creating one, emphasizing how these tailored AI models can be designed to address specific tasks and objectives. The session will highlight the importance of designing interfaces that foster meaningful human-AI interaction, rather than replacing human connections. By exploring examples in education, research, and professional development, participants will see how custom GPTs can enhance learning, support creativity, and foster collaboration. The presentation will also address potential challenges and ethical considerations, offering guidelines for responsible use. Attendees will leave with resources to continue exploring and potentially build their own custom GPTs. All are welcome to attend this session, as getting started with custom GPTs does not require significant technical expertise.

Presenter

Anna Kinney is the coordinator of the University Writing Program at the University of North Dakota. In her position she runs the Writing Center, coordinates the Writing Across the Curriculum program, and helps to facilitate writing across campus by leading writing events and workshops. A significant focus of her work is supporting pedagogical and curriculum development in the changing world of generative AI.

1:30 - 2:10 p.m.

Breakout Sessions

ROOM: Performance Hall

First, we will explore “Skills for the AI Era,” highlighting the competencies needed for individuals to excel. As automation and AI technologies reshape job roles, adaptability becomes paramount. For example, professionals and academics need to constantly update their skills to leverage AI tools effectively. Additionally, we will emphasize the importance of ethical decision-making in guiding AI development and deployment, ensuring these technologies align with societal values. To this end, university curriculums must integrate courses on AI ethics, interdisciplinary problem-solving, and continuous learning methodologies.

Next, we will examine “Human-AI Collaboration.” Effective collaboration between humans and AI systems is key to maximizing their combined potential. This section will cover strategies for leveraging the unique strengths of both parties, such as AI’s data processing capabilities and human intuition and empathy. For instance, in healthcare, AI can assist with diagnostic processes, while doctors provide the nuanced understanding and compassionate care that machines cannot. We will present practical examples and frameworks for fostering synergistic partnerships that achieve shared goals, suggesting that current science and engineering disciplines incorporate more human-centric AI design principles into their programs.

Finally, we will investigate “Workplace Dynamics and Organizational Culture” in the context of AI integration. The impact of AI on job design, employee roles, and organizational culture is profound. For example, AI can enhance employee well-being by automating mundane tasks, allowing workers to focus on more meaningful and creative endeavors. We will analyze how AI is transforming industries like manufacturing, where collaborative robots (cobots) are becoming common, and propose solutions to create a positive and resilient workplace environment. This necessitates a redesign of university curriculums to include more experiential learning opportunities, such as internships with AI companies and project-based courses that reflect real-world AI applications.

Presenter

Dr. Harun Pirim received his Ph.D. in Industrial and Systems Engineering from Mississippi State University. He is currently an assistant professor at the Department of Industrial and Manufacturing Engineering, NDSU. His work focuses on leveraging machine learning, deep learning, graph theory, and discrete optimization for various applications. These include social media network analysis to identify individuals with specific traits, using machine learning and deep learning to predict the functions of hypothetical proteins, and conducting multilayer network analysis of power grids. He has published numerous papers in reputable journals and conference proceedings, authored several book chapters, and edited two books on data analytics. He founded the CELL (Connect, Elicit, Learn Lab) at North Dakota State University (NDSU). Outside his academic endeavors, he has a keen interest in science, philosophy, and metaphysics.

ROOM: Choral

The rapid pace of Artificial Intelligence development is drastically changing how society functions and how we view and value humans. Its potential for change and disruption is perhaps greater than that of any other technological development. Historically, these global disruptors, such as the printing press or nuclear capabilities, have fallen into two categories: those that negatively shape human existence and those that are harnessed to enhance it. In this panel, we will discuss the need to move beyond approaches that focus on the technical aspects of AI development and view AI as an expedient tool for replacing human-based activities. This approach is limiting and dangerous. We wish to broaden and flip this perspective by emphasizing the human skills of collaboration, interdisciplinary perspective-making, and creativity as key to more effectively harnessing and using AI capabilities. We will argue against allowing AI to displace or shape our humanity and, instead, for using our uniquely human skills to harness and apply AI in ways that promote – not replace – our strongest human abilities.

Specifically, this three-person panel from UND’s AI and Human Innovation Initiative will use evidence-based arguments to address the emerging trend of AI replacing humans in education, the workplace, and the entertainment industry. We will advocate for flipping our narrative of human-AI interaction, emphasizing how the human skills of collaboration and perspective-taking can lead us to partner effectively with AI in the creation of stronger, more equitable educational practices, more robust workplace environments, and more inclusive and unique artistic endeavors. A Q&A session will be included.

In the 45-minute panel, each member will bring a unique perspective to the discussion of a human-centered approach to AI. Professor Oliver, a theatre scholar, will discuss the impact on the Arts and the potential for inclusion. Ms. Kinney, an expert in writing and AI, will explore pedagogical principles for incorporating AI in the classroom. Professor Carmichael, a scholar in interdisciplinary studies, English, and Theatre, will connect the dots between research and the classroom to explore how the disruption of AI can expand our potential if utilized ethically.

Presenters

Emily Cherry Oliver is currently Professor and chair of the Theatre Arts Department at the University of North Dakota. In addition to her work in theatre, Emily is the Co-Director of the University of North Dakota’s AI and Human Innovation Initiative, where she explores the intersection of artificial intelligence and human creativity. Her areas of scholarship include directing and performance. Emily has an impressive portfolio of professional directing and acting credits and at UND, she has directed numerous productions. She is also currently serving as the co-vice chair for Region 5 of the Kennedy Center American College Theatre Festival.

Tami Carmichael, Professor of English and faculty member in the Theatre Department at the University of North Dakota, is a distinguished scholar specializing in interdisciplinary studies. She served as program director of the Integrated Studies program for seventeen years and is now the associate director of the University’s AI and Human Innovation Initiative, focusing on AI’s influence on pedagogy and assessment. Professor Carmichael also coordinates the American College of Norway program and has previously chaired the North Dakota Humanities Council, showcasing her extensive contributions to academia and leadership in educational innovation.

Anna Marie Kinney serves as Coordinator of the University Writing Program and brings experience supporting faculty in developing an understanding of AI and its impact on curriculum and pedagogy. At the University of North Dakota, she is Co-Director of the AI and Human Innovation Initiative, and she leads workshops educating faculty on ethically navigating AI use for improved student outcomes. Additionally, she serves on the North Dakota University System AI Forum and has served the state as a speaker on AI in higher education. Her experience includes developing writing curriculum, supporting best practices for inclusive pedagogy, and supporting faculty development.

ROOM: CFA 222

Artificial Intelligence (AI) technologies have been utilized in healthcare for decades, assisting professionals in diagnosing, treating, and communicating with patients. AI technology has advanced rapidly in recent years to include robotic surgery and the ability to trend data, leading to improved patient outcomes. There is tremendous potential in expanding AI technology, but also substantial challenges that require ethical, financial, and professional consideration. Healthcare professionals care for people; the nursing profession meshes art and science to provide caring behaviors such as presence, touch, listening, and coming to know the patient. A humanoid robot that assists in lifting and turning patients has been utilized in elder care in Japan for the past two decades. But can this “hugging” robot provide the caring behaviors we all crave and need? This presentation will discuss the ongoing nursing shortage and AI technologies currently used in healthcare, evaluate potential concerns with newly emerging AI technology, and share insights into the healing power of caring behaviors.

Presenter

Dr. Mary Beth Johnson obtained her BSN from the University of Mary (then Mary College) and began her nursing career in the Neonatal Intensive Care Unit at St. Alexius Medical Center in Bismarck. She obtained her Master’s in Perinatal Nursing from the University of Washington in Seattle. Mary Beth held leadership positions, including serving as Director of Maternal Child Nursing at St. Vincent Healthcare in Billings, MT, for 18 years. Her nursing experience has always been in the Maternal-Child arena. Mary Beth began teaching at the University of Mary in the fall of 2007 and has taught courses in maternity, pediatrics, ethics, law and policy, and nursing leadership. Her interest in bioethics began during the early years of her nursing practice. She earned her Master’s in Health Policy in 2016 and her Doctorate in Bioethics in 2020 from Loyola University Chicago. Mary Beth was an RN board member on the ND Board of Nursing from July 2015 until June 2022. She is a member of several professional nursing organizations.

ROOM: CFA 240

How do people with disabilities use technology to access digital content? The number of people with disabilities is estimated to be 15% of the global population (approximately 1 billion people). Dr. Lacey Long, Certified Assistive Technology Instructional Specialist (CATIS) and Consumer Advisory Council (CAC) Member for ND Assistive, will present information on how artificial intelligence (AI) technology can improve accessibility for individuals with disabilities.

Dr. Long will define and share examples of assistive technology (AT) and explain how these programs and products can be utilized in conjunction with artificial intelligence (AI), specifically for individuals who are blind or have low vision, individuals who are deaf or hard of hearing, individuals who have physical disabilities or mobility needs, and/or individuals who have learning differences or cognitive disabilities. She will also share information on ADA (the Americans with Disabilities Act) guidelines for state and local government agencies about the accessibility of web content and share resources for accessibility training, with a focus on using plain language and the four principles (POUR) of web accessibility.

Presenter

Dr. Lacey Long is a Research Associate for the North Dakota Center for Persons with Disabilities (NDCPD) at Minot State University. She is currently the Project Director for the ND Dual Sensory Project, the state’s deafblind project. Lacey has obtained a bachelor’s degree in Elementary Education, a master’s degree in Special Education, and a doctoral degree in Educational Practice and Leadership from the University of North Dakota. Lacey is a certified teacher of students with visual impairments, a teacher of the deaf/hard of hearing, an orientation and mobility specialist (COMS), and an assistive technology instructional specialist (CATIS). She lives in Bismarck, ND (where she was born and raised) with her husband, Jackson, and two sons, Reuben and Louis.

2:10 - 2:30 p.m.

Afternoon Break

Foyer of Center

2:30 - 3:10 p.m.

Breakout Sessions

ROOM: CFA 240

In the age of rapid technological advancements, organizations face the critical challenge of determining when to utilize automation and artificial intelligence (AI) and when to rely on human intervention. This presentation aims to explore a framework for making these decisions, ensuring efficiency, accuracy, and ethical considerations are upheld.

The decision to automate or rely on human expertise depends on several key factors, including the complexity of tasks, the necessity for creativity, the requirement for empathy, and the potential for ethical implications. This presentation will help identify best practices and criteria that organizations can adopt to make informed decisions.

Key considerations include:

Task Complexity and Predictability: Routine, repetitive tasks with high predictability are prime candidates for automation. Conversely, tasks requiring nuanced judgment, creativity, and problem-solving skills are better suited for human intervention.

Empathy and Emotional Intelligence: Activities involving customer service, caregiving, and counseling necessitate a human touch due to the inherent need for empathy and emotional intelligence.

Ethical and Legal Implications: Decisions with significant ethical ramifications, such as those in healthcare, law enforcement, and finance, should involve human oversight to navigate moral complexities and maintain accountability.

Quality and Error Management: Automation excels in enhancing precision and reducing human error in data-intensive processes. However, human oversight remains crucial for ensuring quality control and handling unforeseen anomalies.

The goal is to achieve a harmonious balance where automation enhances efficiency, and human capabilities drive innovation and ethical decision-making.

This presentation contributes to the broader discourse on the future of work, highlighting the evolving roles of humans and machines in a synergistic environment. Attendees will gain insights into strategic implementation, fostering a more efficient and ethically sound organizational landscape.

Presenters

Isaac Trottier started his career maintaining legacy Java web applications, but his passion for emerging technologies led him to the AI and Automation team, where he now utilizes robotic process automation to automate several business processes, saving the state time and money. He has a bachelor’s degree in computer science as well as seven years of experience with the state. Isaac has been very active in the exploration and development of AI technologies within the state and has been central to the state’s first attempt at a generative AI chatbot.

Michael McCrory currently serves the State of North Dakota as the Product Owner for the Robotic Process Automation Team. Specializing in the implementation of automation and artificial intelligence projects within government agencies, Michael leverages his passion for emerging technologies to enhance citizens’ daily lives. With a major in Computer Science and a minor in Sociology from NDSU, along with seven years of experience in state government, he effectively identifies and integrates these technologies into state processes. Michael also contributes to ongoing discussions about the ethical and practical implications of AI in government and is instrumental in leading NDIT’s first Generative AI Chatbot project.

ROOM: Performance Hall

This presentation explores the transformative power of AI in modern workplaces. It covers practical ways to leverage AI to enhance productivity, drive innovation, and improve decision-making processes. From automating routine tasks to providing insights through data analysis, AI is reshaping the way work is conducted. The discussion also addresses the broader implications of AI adoption, including its potential to create new job opportunities and the ethical considerations involved. Attendees will gain a comprehensive understanding of AI’s current capabilities and future potential, along with actionable strategies to implement AI tools effectively in their organizations.

Presenter

Marlo Anderson, entrepreneur, tech talk show host, futurist, and founder of National Day Calendar, is best known for his life’s motto, “Celebrate Every Day!” and the creation of nearly 400 new National Days, including National Avocado Day, National Astronaut Day, and National Bobblehead Day.

What began as a hobby blog has become one of the most celebrated movements in history. National Days is now the #1 trending topic of all time. The “Influencer to Influencers” has trended at least 450 times a year with giant topics like National Siblings Day, National Margarita Day, and National Puppy Day and has a reach in the hundreds of millions.

To date, over 20,000 media outlets and personalities follow the National Day Calendar’s daily post and use its content as prep for their daily shows.

Marlo is also the host of the radio show The Tech Ranch and is affectionately known to his followers as “The Guru of Geek.” This unique platform has given him first-hand knowledge of emerging technology trends and hardware.

With a passion for start-ups, Marlo has been directly involved in the creation of Talking Trail, Zoovio, Awesome 2 Products, Ace Putt, Carbon Convert and National Day Calendar.

ROOM: CFA 222

The presentation will include an overview of the research study, its results, and the application of those results to maximize the shared potential of human-AI interaction. The study was designed to determine nursing students’ and faculty members’ perceptions of generative AI use in answering a nursing education discussion question by comparing a human-written response to a chatbot-written response. The study also assessed knowledge of and attitudes toward generative AI in addition to the direct comparison of a human-written response to a generative AI chatbot response. IRB (Institutional Review Board) approval was received on February 20, 2024. Data was collected from March 18 to May 18, 2024, and is currently being analyzed. Initial results show that 125 nursing students and faculty participated in the research study. Analysis of the results will be completed by the time of the conference.

This aligns with the conference theme of Human-AI Collaboration, as the results of the study may be examined and applied to build strategies for fostering productive collaboration between humans and AI chatbots to achieve the shared goals of higher education. The intended audience for this presentation includes researchers, industry professionals, leaders, policymakers, educators, and students. The presentation will combine a slide-based lecture with active audience participation through engaging questions, demonstrations, and examples. Objectives of the presentation include identifying key concepts of generative AI, examining strategies for human-AI collaboration in education, and describing at least two perceptions of chatbot use in education to answer a nursing question.

It is crucial that individuals are equipped with the foundational knowledge and skills needed to utilize generative AI efficiently, effectively, and ethically.

Presenters

Carey Haugen is a board-certified Clinical Nurse Specialist and serves as the Dean, MSN Coordinator, and assistant professor at Mayville State University. She received a $1,000 Artificial Intelligence scholarship from the Dakota Digital Academy to complete the ACUE AI Quick Series. Carey has earned digital badges in effective college instruction and four digital badges specific to AI. Additionally, she has completed AI research and recently provided a presentation on Generative AI to the Dakota Nursing Program. With over 20 years of nursing experience and 10 years of teaching experience, Carey hopes to inspire others to utilize AI efficiently, effectively, and ethically.

Dr. Mary Beth Johnson obtained her BSN from the University of Mary (then Mary College) and began her nursing career in the Neonatal Intensive Care Unit at St. Alexius Medical Center in Bismarck. She obtained her Master’s in Perinatal Nursing from the University of Washington in Seattle. Mary Beth held leadership positions, including serving as Director of Maternal Child Nursing at St. Vincent Healthcare in Billings, MT, for 18 years. Her nursing experience has always been in the Maternal-Child arena. Mary Beth began teaching at the University of Mary in the fall of 2007 and has taught courses in maternity, pediatrics, ethics, law and policy, and nursing leadership. Her interest in bioethics began during the early years of her nursing practice. She earned her Master’s in Health Policy in 2016 and her Doctorate in Bioethics in 2020 from Loyola University Chicago. Mary Beth was an RN board member on the ND Board of Nursing from July 2015 until June 2022. She is a member of several professional nursing organizations.

ROOM: Choral

In this presentation, we delve into the transformative impact of conversational AI models like ChatGPT on data analysis and data science. These sophisticated tools are reshaping how we extract insights, automate processes, and make decisions, significantly enhancing productivity and fostering innovation. I will share my personal experience with utilizing ChatGPT to design and implement predictive analytics models. Specifically, I’ll demonstrate how this technology assists in overcoming language barriers commonly encountered by data professionals transitioning between SQL and more complex programming languages like R or Python.

ChatGPT has proven indispensable in simplifying the complexities of data manipulation and analysis. By utilizing a tool such as ChatGPT, I have been able to refine my limited R knowledge by interacting with the AI and translating my “wants and desires” into R commands. However, the adoption of conversational AI models does not replace the need for education or an understanding of data analytics; it merely allows someone to refine scripts and functions to achieve the final goal. AI tools may not provide fully functional scripts or code snippets, requiring human intervention to correct syntax errors or to refine the dataset to accommodate the generated code.
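
As a minimal, purely illustrative sketch (not taken from the session itself), the kind of translation described above might look like the following: a familiar SQL aggregation over a hypothetical student-retention table, rewritten as the pandas code an AI assistant might draft, which a human analyst still reviews for column names, data types, and edge cases before trusting the output.

import pandas as pd

# Hypothetical example data standing in for the kind of institutional records described above.
students = pd.DataFrame({
    "term": ["FA23", "FA23", "SP24", "SP24"],
    "retained": [1, 0, 1, 1],
})

# The question, phrased in familiar SQL:
#   SELECT term, AVG(retained) AS retention_rate FROM students GROUP BY term;
# An equivalent pandas draft an AI assistant might suggest; the analyst verifies it
# against the real schema before relying on the result.
retention_rate = students.groupby("term")["retained"].mean().rename("retention_rate")
print(retention_rate.reset_index())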

Furthermore, we will discuss the management of privacy concerns when handling confidential information. Understanding the data privacy policies of AI tools is crucial for ensuring the security of sensitive data. The session is designed to familiarize attendees with the practical uses of AI tools like ChatGPT, highlighting the necessity of critical human oversight.

We will address the limitations of AI-generated code and scripts, which, while not providing complete solutions, serve as valuable guides in achieving desired outcomes. By understanding these tools’ capabilities and boundaries, data professionals can better navigate the complex landscape of AI-assisted data science, ensuring data security and project success.

(This abstract was partially written and reviewed by ChatGPT)

Presenter

Jim Sorenson is the Director of Institutional Research for the University of Mary. He has a bachelor’s degree in Environmental Science from American Military University and is nearly finished with his master’s degree in Data Analytics from Colorado State University. Jim’s thesis was written on the use of predictive analytics in First Time in College (FTIC) freshman retention. Jim has a passion for data and has become a subject matter expert in Power BI.

3:20 - 4 p.m.

Breakout Sessions

ROOM: Performance Hall

As we rapidly advance into an era dominated by artificial intelligence (AI), the necessity for a data-literate population has never been more critical. This presentation will explore the fundamental importance of data literacy, emphasizing how it empowers individuals and organizations to make informed, responsible decisions in a world increasingly influenced by AI technologies. We will cover: how data literacy is crucial in ensuring that all individuals, regardless of their technical background, can engage with and benefit from AI advancements; how data literacy bridges the gap between complex AI systems and the public, fostering a culture of transparency and trust; and how data literacy also plays a vital role in mitigating biases within AI systems, as a well-informed populace can better recognize, question, and address potential disparities. By equipping society with the skills to critically assess and interpret data, we enable more equitable access to the benefits of AI, ensuring that its transformative power is harnessed for the common good.

Presenters

Jill K. Baber currently serves the State of North Dakota as Product Owner for NDIT’s Data Science & Analytics Team, where she assists agencies in the art and science of data-driven decision-making, providing guidance on effectively leveraging AI and traditional analytics tools. Jill holds a Master’s Degree in Public Health and has worked for 14 years in the domain of data, as a data scientist, epidemiologist, and clinical researcher. Jill is a passionate advocate for data literacy—a foundational concept for an AI-driven future—and loves to explore the use of machine learning models for novel applications in state data.

Jason D. Anderson brings 20 years of industry experience, 17 of them with the State of North Dakota. He has the privilege of leading NDIT’s AI and Automation efforts and has a passion for data, technology, and building great teams. He is a father of five and has been married to his lovely wife, Cathryn, for 12 years. Jason also serves on several science and technology boards and takes an active role in his community.

ROOM: Choral

Join Dr. Stacy Duffield, Director of the Office of Teaching and Learning at North Dakota State University, and William Grube, founder of Gruvy Education, to explore the ethical challenges of using AI for teaching and learning. This session will dive into the appropriate and inappropriate use of AI for educators and learners, the limits of AI detectors, and crafting policies for using AI. We will also explore strategies for teaching 21st-century skills, creating AI-resistant assignments, and integrating AI responsibly into classrooms.

We will examine the potential side-effects of AI, both positive and negative, through questions like, “What does it mean for learners to trade off the zone of proximal development for ease of access to knowledge creation?” and “How does AI contribute to misinformation?” This session will spark important discussion and enhance understanding of AI’s responsible and ethical use in education.

Presenters

Dr. Stacy Duffield is the director of the Office of Teaching and Learning at North Dakota State University. She holds advanced degrees in secondary and higher education, including literacy acquisition. Through her current role, she is involved with the use of AI in teaching and learning.

William Grube is the founder of Gruvy Education. He delivers AI in Education Training to over 80 K-12 schools. He received his undergraduate degree in computer science from North Dakota State University in the spring of 2024. William provides a unique perspective on preparing students for a world in which AI exists, focusing on practical applications and ethical considerations.

ROOM: CFA 240

There has been a lot of conversation around student use of generative AI, but instructors are also using these tools to enhance student learning and engagement. In this session we will explore how generative AI can assist educators in developing effective course materials that foster engagement and collaboration and help develop learners’ critical AI and digital literacies. Opportunities to increase the accessibility and inclusiveness of activities and materials with AI will also be shared. A key part of our discussion will be examining AI bias and ethical considerations, with the goal of fostering a learning environment where generative AI enhances teaching practices responsibly and ethically.

Over the last year, we have offered multiple faculty development workshops on AI to support educators at the University of North Dakota, including several on using AI effectively and ethically to promote student engagement. We will share these strategies for using AI to create teaching materials in support of different learning outcomes and for different purposes, as well as share examples, insights, and practical approaches from faculty members who have integrated generative AI effectively into their classrooms. Though the conversation is focused primarily on faculty use of AI for their courses, the frameworks and methods shared are highly adaptable to fields outside of education, especially where training or professional development are critical for ongoing industry success.

Presenters

Dr. Anne Kelsch has served as Director of Faculty Development at the University of North Dakota since 2007. Her research focuses on new faculty and STEM faculty development. She is the 2022 recipient of UND’s Martin Luther King Jr. Social Justice Award and facilitates faculty development opportunities on best practices for AI in pedagogy and across the curriculum.

Anna Kinney has served as the Coordinator of the University Writing Program at the University of North Dakota since 2017. Her work focuses on inclusive writing pedagogy and supporting faculty across the curriculum in fostering excellence in writing. A significant focus of her work is supporting faculty as they navigate the changing pedagogical landscape due to AI.

ROOM: CFA 222

The presentation explores shadow technology, security considerations for generative AI, and considerations for safely and securely implementing generative AI in business processes to avoid misuse of the technology. The misuse of generative AI can pose risks to organizations, including data exposure and misinformation. An absolute ban on popular generative AI tools can add to security stress, which can lead employees to rationalize information security policy (ISP) infractions and adopt shadow information technology (IT). Shadow IT is the use of unapproved technology by a department or individual without the knowledge of IT or the security group within the organization. Shadow IT can create gaps in security because data is stored in unknown locations, it is unclear how data is used or routed in unapproved applications, and those applications may not meet general security standards.

To avoid the use of generative AI tools such as ChatGPT as shadow technology, organizations should consider how to implement the tools in a meaningful way that helps employees reach their goals. These considerations include identifying users, use cases, processes that may be affected, and data that may be used, through a systematic assessment of needs using the A+ Inquiry framework. The A+ Inquiry framework explores the following: absorbing the request, asking questions, accumulating information, accessing information, analyzing the data to answer questions, announcing findings, and applying the information to create a policy for generative AI that allows safe use of the tool. The policy should clearly define users’ responsibilities and acceptable use of the tool. The goal is to create an organizational culture that uses generative AI tools to benefit the organization while preserving applicable data privacy guidelines. The intended audience is business leaders interested in security and in implementing generative AI solutions.

Presenters

Dr. Serena Pontenila has her PhD in Information Technology with a focus on Information Assurance and Security. She has experience as a Business Systems Analyst where she worked to promote employee adoption of new systems and processes while enforcing security requirements. She is currently an Assistant Professor at Minot State University in the Business Information Technology Department where she teaches courses including Systems Analysis and Design. Her experience and expertise in security, systems analysis, and systems and process adoption gives her a unique perspective on the uses of generative artificial intelligence for business purposes.

Dr. Nathan Anderson is the Director of Institutional Assessment and an adjunct faculty member in the Master of Education program at Minot State University. He holds a Ph.D. in Education with an emphasis on Institutional Analysis. His work focuses on identifying, developing, and implementing strategies to support effective data utilization and responsible decision-making. He is a developer of the A+ Inquiry framework as a broadly applicable inquiry model that synthesizes common stages of research, evaluation, and assessment.

4 p.m.

Closing Remarks

Performance Hall