How good is the advice from ChatGPT for building science? Comparison of four scenarios
| | |
|---|---|
| Author(s): | Adam Rysanek, Zoltan Nagy, Clayton Miller, Aysegul Demir Dilsiz |
| Medium: | journal article |
| Language(s): | English |
| Published in: | Journal of Physics: Conference Series, vol. 2600, no. 8, 1 November 2023 |
| Page(s): | 082006 |
| DOI: | 10.1088/1742-6596/2600/8/082006 |
| Abstract: | This paper resulted from several questions discussed between its human authors shortly after the public launch of OpenAI's ChatGPT: Can a language model, trained on an unimaginably vast database, resolve fundamental data inference and data-driven forecasting problems that have been 'typical' research fare in the building science domain? Is it possible that research problems which 'typically' require user-intensive tools, such as building performance simulation and problem-specific machine learning models, can today be addressed by ChatGPT in a matter of seconds? If so, what does this mean for the future of building science, let alone the writing of novel research contributions in academia? The entirety of this paper was produced with significant use of ChatGPT. Four arbitrarily selected case studies were extracted from recent peer-reviewed journals and reputable sources. ChatGPT was tasked with attempting to infer the same results as the publications using only each case study's input data. Not only were ChatGPT's results found to be relatively credible, but ChatGPT was also able to communicate them instantly and in academic language. From initial brainstorming to final editing, the paper was completed in no more than 8 human-hours by the study's (human) authors. The content of this paper is original and has not been published previously. |
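The abstract describes a prompt-based workflow: handing a case study's raw input data to ChatGPT and asking it to reproduce a publication's results. The paper's authors worked through the ChatGPT interface, but the same kind of task can be posed programmatically. Below is a minimal sketch assuming OpenAI's Python client; the model name, prompt wording, and load profile are illustrative assumptions, not the study's actual inputs.

```python
# A minimal sketch (not from the paper) of posing a data-driven forecasting
# task to an OpenAI model programmatically. The model choice, prompt, and
# sample data below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical case-study input: one day of hourly electricity use (kWh).
hourly_load_kwh = [12.1, 11.8, 11.5, 11.7, 12.3, 14.0, 18.6, 22.4,
                   25.1, 26.0, 26.3, 26.8, 27.0, 26.5, 26.1, 25.4,
                   24.0, 22.7, 21.3, 19.8, 17.5, 15.2, 13.6, 12.7]

prompt = (
    "You are assisting with a building-science study. Given the following "
    f"hourly electricity loads (kWh) for a commercial building: {hourly_load_kwh}. "
    "Forecast the next 24 hourly values, state your assumptions, and report "
    "the result as a comma-separated list."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model; the paper used ChatGPT circa 2023
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

As in the paper's scenarios, only the input data and task description are supplied; any forecast the model returns would still need to be checked against the original publication's results.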
About this data sheet

- Reference-ID: 10777669
- Published on: 12/05/2024
- Last updated on: 12/05/2024