BEIJING, Oct. 5, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) ("WiMi" or the "Company"), a leading global Hologram Augmented Reality ("AR") Technology provider, today announced that it has proposed a brain-computer interface (BCI) control method, based on electroencephalography (EEG), for target search with low-speed unmanned aerial vehicles (UAVs). WiMi's BCI-based UAV control system combines semi-autonomous navigation with motor imagery and consists of two main subsystems: a decision-making subsystem and a semi-autonomous navigation subsystem. Together they enable continuous control of a UAV in the horizontal plane, supporting stable control and target search for low-speed UAVs.
The decision-making subsystem provides decision-level control of the UAV by analyzing motor imagery (MI) EEG signals. MI features are extracted with an improved cross-correlation (CC) method, which effectively captures the MI-related information in the EEG. A logistic regression (LR) classifier then maps those features to decisions that guide the UAV's direction of motion. The decision-making subsystem thus gives the operator direct brain control of the UAV, making operation more intuitive and efficient. It is implemented as follows:
EEG signal acquisition: The operator's EEG signals must first be acquired, typically with a non-invasive device such as a head-mounted EEG sensor or a dry electrode array. The EEG records the electrical activity of the cerebral cortex, in particular the signals associated with MI.
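MI-related activity is typically concentrated in specific EEG frequency bands (for example the 8-12 Hz mu rhythm over the motor cortex). The press release does not describe WiMi's preprocessing, so the following is only a minimal illustrative sketch of isolating such a band from a raw signal with a crude FFT band-pass; the sampling rate and band edges are assumptions, and a production system would use a proper designed filter.

```python
import numpy as np

def bandpass_fft(signal, fs, low_hz, high_hz):
    """Zero out FFT bins outside [low_hz, high_hz] and transform back.
    A crude band-pass for illustration; real pipelines use designed
    FIR/IIR filters to avoid edge artifacts."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return np.fft.irfft(spectrum * mask, n=len(signal))

# Synthetic "EEG": a 10 Hz mu-band rhythm buried under 50 Hz mains hum.
fs = 250  # assumed sampling rate (Hz), typical for consumer EEG headsets
t = np.arange(fs * 2) / fs
raw = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 50 * t)

mu = bandpass_fft(raw, fs, 8, 12)  # keep only the 8-12 Hz mu band
```

After filtering, the 50 Hz interference is removed and the 10 Hz component dominates, which is the kind of cleaned signal later feature-extraction stages would consume.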
MI feature extraction: Once the EEG signals have been acquired, the next step is to extract the MI features. MI refers to the electrical signals the brain generates when a person imagines performing specific movements. These imagined movements are usually mapped to the UAV's motion, such as turning left, turning right, or moving forward.
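The announcement names an "improved" cross-correlation method without detailing it, so the sketch below shows only the standard idea: cross-correlate an EEG epoch against a per-class template and summarize the correlation sequence with a few scalar features. The template, epoch lengths, and chosen statistics are all illustrative assumptions, not WiMi's method.

```python
import numpy as np

def cc_features(epoch, template):
    """Cross-correlate a (demeaned) EEG epoch with a class template and
    summarize the normalized correlation sequence as scalar features."""
    e = epoch - epoch.mean()
    s = template - template.mean()
    cc = np.correlate(e, s, mode="full")
    cc = cc / (np.linalg.norm(e) * np.linalg.norm(s) + 1e-12)
    return np.array([
        cc.max(),                            # peak normalized correlation
        cc.argmax() - (len(template) - 1),   # lag of the peak
        np.mean(cc ** 2),                    # correlation energy
    ])

# Toy demo: a 10 Hz template versus a matching and a mismatched epoch.
t = np.linspace(0, 1, 250)
template = np.sin(2 * np.pi * 10 * t)
rng = np.random.default_rng(0)
epoch_match = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(250)
epoch_other = np.sin(2 * np.pi * 22 * t)

f_match = cc_features(epoch_match, template)
f_other = cc_features(epoch_other, template)
```

An epoch that contains the template's rhythm yields a much higher peak correlation than one at a different frequency, which is the discriminative signal the classifier stage relies on.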
Feature classification and decision-making: After feature extraction, the decision-making subsystem uses a classification algorithm to recognize and distinguish the different MI patterns. Common choices include logistic regression and support vector machines. With these algorithms, the subsystem infers the operator's intention from the recognized MI and guides the UAV's motion accordingly.
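Since the release names logistic regression, here is a minimal from-scratch binary LR trained by batch gradient descent on toy two-dimensional "MI features". The feature values, class layout, and hyperparameters are illustrative assumptions; a real system would train on labeled EEG epochs.

```python
import numpy as np

def train_logreg(X, y, lr=0.5, epochs=500):
    """Binary logistic regression via batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        grad = p - y                             # dLoss/dlogit per sample
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(w, b, X):
    """Threshold the sigmoid output at 0.5."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)

# Toy MI features: class 0 = "imagine left", class 1 = "imagine right".
rng = np.random.default_rng(1)
X0 = rng.normal([-1.0, -1.0], 0.3, size=(40, 2))
X1 = rng.normal([1.0, 1.0], 0.3, size=(40, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)

w, b = train_logreg(X, y)
```

On well-separated feature clusters like these, the classifier reaches near-perfect training accuracy; real MI classes overlap far more, which is why feature quality matters so much.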
Passing instructions to the UAV: Once the decision-making subsystem has classified the MI and reached a decision, it passes the corresponding instructions to the UAV control system, which executes the matching actions. These instructions may be flight direction adjustments, speed changes, or other motion-related controls.
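The decision-to-command hand-off can be as simple as a lookup table from the decoded MI class to a velocity setpoint. The mapping and field names below are hypothetical illustrations, not WiMi's interface; the safety fallback (hover on an unrecognized class) is one common design choice for BCI control loops.

```python
# Hypothetical mapping from a decoded MI class index to a velocity
# setpoint (vx: forward m/s, yaw_rate: rad/s). Illustrative only.
COMMANDS = {
    0: {"vx": 0.0, "yaw_rate": -0.3},  # imagine left    -> yaw left
    1: {"vx": 0.0, "yaw_rate": 0.3},   # imagine right   -> yaw right
    2: {"vx": 0.5, "yaw_rate": 0.0},   # imagine forward -> advance
}

HOVER = {"vx": 0.0, "yaw_rate": 0.0}

def to_command(mi_class):
    """Translate a classifier output into a UAV setpoint; unknown
    classes fall back to hover for safety."""
    return COMMANDS.get(mi_class, HOVER)
```

In a full system this setpoint would then be validated against the navigation subsystem's obstacle map before being executed.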
The key to the decision-making subsystem is to accurately recognize and classify the motor imagery so that the correct commands can be delivered to the UAV. This requires effective EEG feature extraction and efficient classification algorithms to ensure the stability and reliability of the system.
In addition, the semi-autonomous navigation subsystem is the other key component of WiMi's BCI-based UAV control system. Its main goal is autonomous obstacle avoidance, ensuring that the direction of motion issued by the decision-making subsystem is feasible and safe. By combining sensor data with environmental information, it gives the UAV a degree of autonomy and adaptability in unknown or complex environments. The subsystem is realized as follows:
Sensor data acquisition: The UAV is equipped with a variety of sensors, such as LiDAR, cameras, and ultrasonic sensors. These sensors perceive the environment around the UAV, including the location, distance, and size of obstacles.
Environment sensing and map construction: Using the sensor data, the semi-autonomous navigation subsystem senses and analyzes the UAV's surroundings. From these data, the system constructs a virtual map of the environment, marking the locations of obstacles and other important information.
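One common representation for such a map is a 2D occupancy grid: each range reading (bearing and distance relative to the UAV's pose) is projected into world coordinates and the corresponding cell is marked occupied. The release does not specify WiMi's map format, so this is a minimal sketch under that occupancy-grid assumption.

```python
import math

def build_grid(width, height, cell, readings, pose):
    """Mark grid cells hit by range readings as occupied.

    readings: list of (bearing_rad, distance_m) relative to the pose.
    pose: (x, y, heading_rad) of the UAV in world coordinates.
    cell: grid resolution in meters per cell.
    """
    grid = [[0] * width for _ in range(height)]
    x0, y0, heading = pose
    for bearing, dist in readings:
        # Project the ray endpoint into world coordinates.
        x = x0 + dist * math.cos(heading + bearing)
        y = y0 + dist * math.sin(heading + bearing)
        col, row = int(x / cell), int(y / cell)
        if 0 <= row < height and 0 <= col < width:
            grid[row][col] = 1  # occupied
    return grid

# A UAV at (5, 5) facing +x sees one obstacle straight ahead at 3 m
# and one to its left at 2.5 m.
grid = build_grid(10, 10, 1.0, [(0.0, 3.0), (math.pi / 2, 2.5)], (5.0, 5.0, 0.0))
```

Real mappers also trace the free cells along each ray and fuse repeated readings probabilistically, but the endpoint-marking step above is the core of the idea.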
Path planning and obstacle avoidance decision-making: Based on the environment map, the semi-autonomous navigation subsystem uses a path planning algorithm to determine the UAV's flight path. It considers the UAV's current position, the target position, the distribution of obstacles in the environment, and other factors to find a safe and efficient route. If obstacles lie on the path, an obstacle avoidance decision algorithm routes the UAV around them to keep it on a safe trajectory.
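The release does not name the planning algorithm, so as one simple stand-in, here is breadth-first search over a 4-connected occupancy grid, which returns a shortest obstacle-free path when one exists. Practical planners (A*, RRT) add heuristics or continuous-space sampling, but the routing-around-obstacles behavior is the same.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest obstacle-free path on a 4-connected occupancy grid via
    breadth-first search; returns a list of (row, col) or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:  # walk the predecessor chain back
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cur
                queue.append(nxt)
    return None  # goal unreachable

# A wall with a single gap: the planner must detour through the gap.
grid = [
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
path = plan_path(grid, (0, 0), (2, 0))
```

The returned path threads the gap at the right edge rather than crossing the wall, which is exactly the "keep the UAV on a safe trajectory" behavior described above.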
Command delivery: Once path planning and obstacle avoidance decisions are complete, the subsystem delivers the appropriate commands to the UAV control system to direct the flight maneuvers. These commands may adjust the flight direction, altitude, or speed to ensure that the UAV follows the planned path and avoids obstacles.
The semi-autonomous navigation subsystem is designed to improve the UAV's autonomy, enabling it to fly in complex, unknown environments and automatically avoid obstacles it encounters, ensuring flight safety. Compared with a fully autonomous navigation system, a semi-autonomous system still relies to some extent on a manually set target or area; however, through brain-computer interface technology, it lets the operator guide the UAV's movement more flexibly, giving it greater adaptability in complex tasks.
Of course, brain-computer interface technology is still under continuous development, and despite recent progress it faces challenges in practical applications, such as signal noise, individual differences between operators, and system response speed. The performance of the decision-making subsystem therefore still requires continued research and innovation. Nevertheless, as the technology advances, the EEG-based decision-making subsystem is expected to become an important technology in the field of UAV control, opening new possibilities for UAV intelligence and autonomy.
WiMi’s UAV-controlled brain-computer interface system based on semi-autonomous navigation and motor imagery is a major advancement in the field of UAV development, which has revolutionized UAV handling. The successful development of this brain-computer interface system is of great significance to the development and popularization of UAV technology, making UAV operation simpler, more intuitive and more efficient. The application of this technology will promote the development of the drone industry and bring more convenience and possibilities to various fields. With the continuous expansion of UAV application scenarios, more intelligent and simplified control will help UAV technology play a greater role in agriculture, transportation, rescue, mapping and other fields.
About WIMI Hologram Cloud
WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.
Safe Harbor Statements
This press release contains "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar statements. Statements that are not historical facts, including statements about the Company's beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company's strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission ("SEC") on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company's goals and strategies; the Company's future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company's expectations regarding demand for and market acceptance of its products and services.
Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.