This study aims to develop a visualization technique for performing arts convergence VR (Virtual Reality) content based on cases of arts and culture AI (Artificial Intelligence) data. VR content is undergoing new changes driven by contactless environments and 5G technology. In the field of arts and culture, VR techniques are applied when developing "on-tact" (online contact) content, and AI produces new cases by converging with arts and culture. This research therefore calls for converging content built on performance and exhibition data. The research model sample produced in this study analyzes arts and culture data and derives a visualization result. Using VR techniques, scenes filmed with a 360° camera are edited in After Effects and Premiere to produce an equirectangular-projection research model sample. Uploading the sample to social media converts it into 360° VR content, which users then view with an HMD (Head Mounted Display). The resulting visualization technique is expected to inform the planning and development of on-tact arts and culture content. It can also serve as IP (intellectual property) for arts and culture management.
This study has four goals. First, the overall direction of the study is set through an analysis of previous studies, focusing on arts and culture, VR, AI, and data. Second, I select and compare cases of arts and culture content that use VR and AI technologies to derive results. Third, I establish AI data cases by analyzing performance and exhibition data, and design a converging research model sample based on them. Finally, the research model sample is produced in stereoscopic VR format.
The study consists of five chapters. Chapter 1 presents the necessity, purpose, method, and subject of the study. Chapter 2 examines the background and cases by field, and studies applied cases through comparison. Chapter 3 describes the entire process of making a research model sample, focusing on the arts and culture AI data analysis process and the VR model-making process. Chapter 4 presents a visualization technique for the research model sample. It summarizes virtual lighting, camera application techniques in a 3D environment, and stereoscopic production techniques. Chapter 5 derives a research model based on the research result. Based on the model, I explore implications and future directions.
Previous research was reviewed from two perspectives. The first was the technical aspect of converging content: studies on the development of VR techniques and the interfaces of immersive content were analyzed, AI techniques were summarized from prior work on the concept and types of data, and VR imaging techniques were reviewed through prior work on cubemap and equirectangular projection.
The second perspective covered cases of convergence between on-tact performing arts and VR-based arts and culture. Prior studies analyzing metaverse services and their educational uses were examined, and cases of converging arts and culture with AI were summarized.
Content and service cases were presented by analyzing the theoretical background and related case studies. Based on the case studies, indicators reflecting characteristics and components were selected and compared. As theoretical background, AI, XR (Extended Reality), and VR techniques and the on-tact phenomenon were considered. VR performance and exhibition content, metaverse services, on-tact performance and exhibition content, and arts and culture AI data-based cases were investigated. Analysis standards were derived by sector, and the cases were comparatively analyzed and presented statistically according to those standards. Four findings emerged.
First, among on-tact content, securing the profitability of video content scored highly. The proportion securing profitability was highest in the performance field, suggesting that profitability will shape the development of on-tact performance and exhibition content.
Second, among VR content, performance content produced in a virtual space using a pre-render method scored highly. The research model sample was therefore produced with a pre-render method that composed the virtual space in computer graphics, which I judged optimal given the characteristics of VR.
Third, among metaverse services, social activities and performance and exhibition content scored highly. Social networking activity on the web was common to these cases. Future content is expected to develop in ways that stay connected to and communicate with reality, a role the metaverse service performed well: even within virtual space, it maintained a connection with reality, supported social networking, and generated revenue.
Fourth, among arts and culture AI content, AI techniques using unstructured data scored highly. Unstructured data gains meaning and value through an analysis process, and AI data analysis is a technique for putting such data to use. The cases investigated were based on AI analysis of images, texts, and sounds in the field of arts and culture, all of which involved unstructured data.
Based on these analysis results, a convergence content visualization technique was developed. Image and video sources were secured for application to the virtual exhibition hall, focusing mainly on data sources related to the performing arts. A work procedure was presented for analyzing and visualizing arts and culture AI data. A data analysis tool was used to represent text-centered unstructured data as a visualization image. I utilized big data on popular performance and exhibition interests by age, drawn from user data; it covered the performance and exhibition interests of all age groups, from teenagers to people in their 60s, over a certain period. The visualized data was implemented as images through Google Data Studio and became a source for use in the 3D (three-dimensional) virtual space.
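The aggregation step behind such an age-by-interest visualization can be sketched as follows. This is an illustrative, standard-library-only sketch with hypothetical records (the field names and sample values are assumptions); the study itself used a commercial data analysis tool to render the final images.

```python
from collections import Counter, defaultdict

# Hypothetical records: (age_band, keyword) pairs, as a search-log or
# survey export might contain. The actual study drew on big data about
# performance/exhibition interests from teenagers to people in their 60s.
records = [
    ("10s", "musical"), ("10s", "musical"), ("10s", "exhibition"),
    ("20s", "concert"), ("20s", "musical"),
    ("60s", "classical"), ("60s", "exhibition"), ("60s", "classical"),
]

def interest_by_age(rows):
    """Count keyword frequency per age band - the table that would feed
    a bar-chart or treemap visualization in a reporting tool."""
    table = defaultdict(Counter)
    for age, keyword in rows:
        table[age][keyword] += 1
    return {age: dict(counts) for age, counts in table.items()}

summary = interest_by_age(records)
print(summary["10s"])  # keyword counts for teenagers
```

The resulting per-age tables correspond to the visualization images that were later placed as exhibits in the virtual space.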
After obtaining the visualization data, the research model sample was produced following the development process. A virtual VR space was constructed using After Effects and Premiere, consisting of three parts. The first was the lobby. The research model sample was produced as converging content for performances and exhibitions using VR and AI techniques, and the lobby was designed so visitors could view the exhibition label and an explanatory video. The exhibition commentary, introduction video, and exhibition image occupied one side of the lobby.
The second was the hallway, which connected the spaces and doubled as exhibition space, presenting the exhibition and performance content derived with the data analysis tool. Eight exhibition sources were placed on the two walls: a structure embodying the title of the research model in embossed form, four exhibits capturing images of the performance hall, and two exhibits capturing the performance itself with a 360° camera. The remaining exhibits were visualization images derived from the AI data analysis.
The third was the exhibition space, where images and videos related to performances were composed as exhibition content. Exhibition commentaries, vinyl structures, one performance video screen, and multiple images were placed in the space. With video, image, and text as the main exhibition elements, performance information was provided to the audience.
The actual application of the visualization technique proceeded through 3D space modeling, the use of a virtual camera, and VR video design. 3D space modeling was the task of producing the virtual space used for the exhibition: six cubes were implemented using 3D layers, making up one main lobby, one exhibition hallway, and four exhibition spaces. Cubemap and virtual lighting techniques were used. The cubemap technique composed a cube by applying a 3D layer effect to six 2D solid layers; each cube was produced by adjusting the layer values along the X, Y, and Z axes through the 3D layers. The virtual lighting technique used ambient light, point light, and spotlights to realize a space with a texture close to reality. The ambient light was the main light that set the brightness of the whole environment. The point light distributed light evenly, playing a role similar to a fluorescent lamp and suiting the expression of light and darkness. The spotlight, used most often in the research model sample, appears as a cone that emphasizes its subject. The position and angle of each light were adjusted as it was placed in the virtual space.
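The cube construction described above amounts to giving each of the six flat layers one fixed offset and one fixed rotation. A minimal geometric sketch, assuming a cube of edge `size` centered at the origin (After Effects itself works in pixel coordinates with per-layer anchor points, so the exact numbers are illustrative):

```python
def cube_face_transforms(size):
    """Position (x, y, z) and rotation (rx, ry, rz) in degrees for six
    2D planes forming a closed cube of edge `size` centered at the
    origin - the same idea as enabling the 3D switch on six solid
    layers and offsetting each along one axis before rotating it."""
    h = size / 2
    return {
        "front":  ((0, 0, -h), (0, 0, 0)),
        "back":   ((0, 0, h),  (0, 180, 0)),
        "left":   ((-h, 0, 0), (0, 90, 0)),
        "right":  ((h, 0, 0),  (0, -90, 0)),
        "top":    ((0, -h, 0), (90, 0, 0)),
        "bottom": ((0, h, 0),  (-90, 0, 0)),
    }

faces = cube_face_transforms(1000)
print(faces["front"])  # ((0, 0, -500.0), (0, 0, 0))
```

Repeating this construction six times yields the lobby, hallway, and four exhibition spaces of the research model sample.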
The scene of the completed research model sample changes as the viewpoint moves through the equirectangular-projection environment. Virtual camera layers were used to explore and move between spaces. I adjusted the angle and position of the camera layer in After Effects and used keyframes to create movement. Position and rotation values were assigned along the up, down, left, and right directions with the orbit camera tool. Acceleration (ease) keyframes made the camera layer's movement and rotation feel natural. The position value was changed 13 times and the direction value 12 times.
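The effect of an acceleration keyframe can be sketched as eased interpolation between two keyframe values. The smoothstep curve below is an assumption standing in for After Effects' ease curves, which are adjustable; the point is that velocity is zero at both keyframes, so the camera accelerates and decelerates rather than starting and stopping abruptly.

```python
def ease(t):
    """Smoothstep ease-in/ease-out for t in [0, 1]: zero velocity at
    both ends, so motion ramps up and then slows down."""
    return t * t * (3 - 2 * t)

def interpolate(k0, k1, t):
    """Eased camera value between keyframes k0 -> k1 at progress t."""
    return k0 + (k1 - k0) * ease(t)

# A camera Z position moving from 0 to 1000 across one keyframe pair:
print(interpolate(0, 1000, 0.0))   # 0.0 - at the first keyframe
print(interpolate(0, 1000, 0.5))   # 500.0 - midpoint
print(interpolate(0, 1000, 1.0))   # 1000.0 - at the second keyframe
```

Linear interpolation would cross 250 at t = 0.25; the eased curve is still below that, which is what makes the start of the move feel gradual.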
The 3D space was transformed into VR content format through VR video design. In After Effects, scenes were converted to equirectangular projection using a dedicated plug-in optimized for VR editing. The four output scenes were merged into a single VR video in Premiere, where the four scenes plus an intro and ending were cut and edited. The result was then converted from monoscopic to stereoscopic using a stereoscopic conversion technique: the output takes a 3D VR format that accounts for binocular parallax, realized with VR projection effects. Finally, two equirectangular projection images were placed at the top and bottom to output the research model sample.
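The two format decisions above can be made concrete. In an equirectangular frame, each pixel maps to a viewing direction on the sphere, and the top-bottom stereoscopic layout stacks the left- and right-eye frames vertically. A sketch assuming the common longitude/latitude convention (exact axis orientation varies by tool):

```python
import math

def pixel_to_direction(x, y, width, height):
    """Map an equirectangular pixel to a unit viewing direction:
    the horizontal axis spans 360 deg of longitude, the vertical axis
    spans 180 deg of latitude."""
    lon = (x / width - 0.5) * 2 * math.pi   # -pi .. pi
    lat = (0.5 - y / height) * math.pi      # +pi/2 (up) .. -pi/2 (down)
    return (math.cos(lat) * math.sin(lon),  # x: right
            math.sin(lat),                  # y: up
            math.cos(lat) * math.cos(lon))  # z: forward

def top_bottom_frame_size(mono_w, mono_h):
    """Top-bottom stereoscopic output keeps the mono width and doubles
    the height, one eye's equirectangular image above the other."""
    return (mono_w, mono_h * 2)

print(pixel_to_direction(2048, 1024, 4096, 2048))  # center -> forward
print(top_bottom_frame_size(4096, 2048))           # (4096, 4096)
```

This is why the finished sample consists of two equirectangular images stacked vertically: a player that understands the layout feeds each half to the corresponding eye of the HMD.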
In the conclusion and suggestions, four research models are derived by comparing application cases and by developing and applying the visualization technique.
First, a performance exhibition converging model is built based on the research model sample. The model is a cross-functional flowchart based on the user experience of viewing performances and exhibitions in virtual space.
Second, a metaverse performance and exhibition model is presented. I itemize the conditions for the metaverse platform and its content; when these conditions are met, communication between users occurs in VR and XR-based performance, exhibition, and educational content.
Third, an arts and culture AI converging content model is designed. AI data analysis is applied to big data on performances and exhibitions, and XR is combined with the resulting text, images, and videos to reconstruct them as performance and exhibition content.
Fourth, an on-tact education model is built. From the perspectives of instructor and learner, I present the conditions under which communication occurs in virtual space. Based on the four research models, I will develop expression techniques for arts and culture AI data-driven converging content. In the future, arts and culture AI data analysis can guide the implementation of an XR-based performance and exhibition content platform.
The academic implications are discussed in terms of techniques, content, and expression techniques. The technical implication lies in constructing a virtual exhibition space in a 3D environment with VR techniques, which follow-up research can apply to on-tact environments. On the content side, the study shows that convergence research is possible by applying such techniques to arts and culture: a model combining performances and exhibitions with technique was derived from the research model sample and can be applied in the field as a method for expressing works. Expression techniques were presented systematically through the research results. Because the research model sample was created with VR and AI techniques, the AI data used here will be available as an academic reference for future arts and culture data analysis.
Practical implications appear in performance and exhibition convergence, metaverse performances and exhibitions, arts and culture AI convergence, and on-tact education. VR and AI techniques will influence the production of on-tact performance and exhibition content. The research model sample uses pre-rendering to create the virtual exhibition space; adding interactive techniques and ambisonic sound would yield XR content with multimedia characteristics and advance practical development. From the metaverse perspective, the relationship between XR and social media carries implications: XR services with social media features will connect reality with virtual reality and explore various economic values. From an arts and culture AI perspective, I develop a new visualization technique using deep learning and machine learning; the data analysis identifies the public's needs for arts and culture content and feeds the development of a user-centered arts and culture XR platform. In on-tact education, the research model sample can serve as teaching material: immersive content providing audiovisual stimulation can be used to stimulate students' interest.
Future research should target an XR metaverse platform that uses AI data analysis content. The first step is to define the service concept: a platform focused on user experience and education grounded in arts and culture. The second step concerns future research projects: detailed planning is carried out based on the concept, and further research is needed on generating profit through the service and using it as a promotional marketing tool from the perspective of arts and culture management. The third step sets future research tasks in the technical field: research is needed on mitigating the side effects of watching VR content for long periods, and, in relation to technology development, patents and IP for performance and exhibition content can become research tasks that add value to arts and culture management.
Keywords: Arts and Culture, AI Data, Convergence, VR Content, Visualization Technique