Rookie Robotics Team from Small UWS High School Joining the Giants in Robotics Competition – westsiderag.com

Sonia Benowitz is second from left. Credit: Annabelle Malschlin.

By Lisa Kava

Students from the newly formed robotics team at West End Secondary School (WESS), on West 61st Street, are competing in the New York City regionals of the FIRST Robotics Competition (FRC) from April 5-7. The event will take place at the Armory Track and Field Center in Washington Heights.

Founded in 2015, WESS has 500 students in its public high school. How did its novice robotics team secure a spot at FRC, alongside larger, well-established schools known for their STEM (Science, Technology, Engineering, and Math) programs, such as The Bronx High School of Science and Stuyvesant High School?

The story starts in September 2023, when Upper West Sider Sonia Benowitz, 14, entered 9th grade at WESS. She had loved building LEGO robots in WESS's middle school robotics club, as well as the club's community and working with friends toward a common goal, she told West Side Rag in a phone interview. But no such club existed for high school students. So she created one.

First, she approached her school principal, who was supportive, she said. Benowitz then asked her middle school robotics coach, Noah Tom-Wong, to help run the club. Together with math teacher Evan Wheeler, who signed on as faculty leader, they began to spread the word. Soon the club had 25 members from 9th through 12th grade.

With Tom-Wong's guidance, the club members gathered wood, metal, and other supplies, ordering from vendors and robotics companies. They began to build a fully functional robot that could perform various tasks through remote wireless control. "For example, one task is that the robot will use its arms that we built to pick up disks shaped like frisbees," Benowitz said, "then throw the disks into a goal area."

Tom-Wong suggested the club enter the FIRST Robotics Competition, in which he had competed as a student at Stuyvesant High School. He volunteers frequently at FRC competitions. "Robotics provides students [with] an incredibly unique environment where they can exert energy safely and with great impact," he told the Rag. "The nature of the competition not only makes students good at STEM, but also [at] STEM communication."

But the $6,000 registration fee for the competition was not in the school budget. That's when Samantha Alvarez Benowitz, Sonia's mom, got involved. Researching, she learned about a rookie grant from NASA through its Robotics Alliance Project. The WESS team applied and got it. According to Alvarez Benowitz, they were the only school in New York City selected to receive the NASA grant, and one of five schools in New York State.

"On the application we had to describe who was on our team, so I did a demographic survey and found that close to 70% of our team members are from historically underrepresented groups in STEM, including women, people of color, LGBTQ+, and students with disabilities," Sonia Benowitz said. "They also wanted to know how we would get and pay for the supplies we needed to build the robot." The team has been fundraising through bake sales and other school functions. They also applied for grants, receiving $2,500 from the Gene Haas Foundation, which sponsors STEM education.

At the competition, the WESS team will be paired with two other teams to form a three-team alliance. Each team has its own robot, which will be programmed to perform different tasks. The robots are judged and awarded points. "We have to prepare our robot to complete as many tasks as possible, but also to complete tasks as well as possible," Benowitz explained. The WESS robot has been programmed to drive up a ramp onto a platform, "like a car on a road," Alvarez Benowitz added. The ramp and platform are part of an existing set that all the teams use.

Working collaboratively is crucial, according to Tom-Wong. "The work that comes out of these robotics teams can be very complex," he said. "It's not unusual at competitions to see students from multiple teams working together to fix one team's problem." The top five teams will compete in the championships in Houston at the end of April.

Benowitz is excited about the competition. "Our team has been working towards this moment for months, and we have all put in a lot of time and effort to get here." She is also a little nervous. "I hope that our robot won't have any problems or break in the middle of a match."

Tom-Wong credits the rookie team for its perseverance. The group "had to work with less stock and fewer tools [than most teams]. We also do not have the experience that the veteran teams have," he told the Rag. He is hopeful that WESS students will remain active in robotics in future years. "Ultimately this group is unique in that they are pioneering the robotics program at WESS. They are laying the groundwork for a place where students can push themselves to learn and develop."



Notus robotics team is headed to 2024 FIRST Championship – KTVB.com

Notus Jr/Sr High School's robotics team of five students is headed to the 2024 FIRST Championship in Houston, Texas.

BOISE, Idaho - A small robotics team from Notus Jr/Sr High School is living the classic underdog story after qualifying to compete at a world championship.

The team of five students will be heading to Houston, Texas, to participate in the 2024 FIRST Championship. On Friday, KTVB spoke to team advisor Nick Forbes, who said this is the first year the program has existed at Notus. But that hasn't stopped them.

In March of 2024, team 9726 received the Rookie of the Year All-Star Award after competing in Boise. A few days later, they were invited to compete on the world stage.

According to the FIRST website, the game changes with every new season, and students need to build a robot to achieve its goal. This year's game is called 'CRESCENDO.'

While FIRST's rules recommend that a team consist of 10 students, team 9726 won with half that. But a student told KTVB it hasn't been without challenges.

"It was entirely made from duct tape, zip ties, and just things that we had to find around," Ezekiel said. "There were sometimes things that we had to improvise through 3-D printings and other things. We're very proud of the work we've done."

He said their robot mainly plays defense, utilizing a wall, which helped them secure a spot at worlds.

The world championship in Houston kicks off on April 16.



The evolution of robotics: research and application progress of dental implant robotic systems | International Journal of … – Nature.com

Implantology is widely considered the preferred treatment for patients with partial or complete edentulous arches.34,35 The success of the surgery in achieving good esthetic and functional outcomes is directly related to correct and prosthetically-driven implant placement.36 Accurate implant placement is crucial to avoid potential complications such as excessive lateral forces, prosthetic misalignment, food impaction, secondary bone resorption, and peri-implantitis.37 Any deviation during the implant placement can result in damage to the surrounding blood vessels, nerves, and adjacent tooth roots and even cause sinus perforation.38 Therefore, preoperative planning must be implemented intraoperatively with utmost precision to ensure quality and minimize intraoperative and postoperative side effects.39

Currently, implant treatment approaches are as follows: free-handed implant placement, static computer-aided implant placement, and dynamic computer-aided implant placement. The widely used free-handed implant placement provides less predictable accuracy and depends on the surgeon's experience and expertise.40 Deviation in implant placement varies considerably among surgeons with different levels of experience. When novice surgeons face complex cases, achieving satisfactory results can be challenging. A systematic review41 based on six clinical studies indicated that the deviations of the platform, apex, and angle from the planned position with free-handed implant placement ranged from (1.25 ± 0.62) mm to (2.77 ± 1.54) mm, from (2.10 ± 1.00) mm to (2.91 ± 1.52) mm, and from 6.90° ± 4.40° to 9.92° ± 6.01°, respectively. Static guides can only provide accurate guidance for the initial implantation position; it is difficult to precisely control the depth and angle of osteotomies.42 The lack of real-time feedback on drill positioning during surgery can limit the clinician's ability to obtain necessary information.42,43,44 Besides, surgical guides may inhibit the cooling of the drills used for implant bed preparation, which can result in necrosis of overheated bone. Moreover, the use of static guides is limited in patients with restricted accessibility, especially for implants placed in the posterior area. Additionally, guides do not allow flexible intraoperative adjustment of the implant plan. With dynamic computer-aided implant placement, the positions of the patient and drills can be tracked in real time and displayed on a computer screen along with the surgical plan, allowing the surgeon to adjust the drilling path if necessary. However, without physical constraints, the surgeon may deviate from the plan or prepare beyond it.
During surgery, the surgeon may focus more on the screen for visual information than on the surgical site, which can reduce tactile feedback.45 A meta-analysis showed that the platform, apex, and angular deviations were 0.91 mm (95% CI 0.79–1.03 mm), 1.26 mm (95% CI 1.14–1.38 mm), and 3.25° (95% CI 2.84°–3.66°), respectively, with static computer-aided implant placement, and 1.28 mm (95% CI 0.87–1.69 mm), 1.68 mm (95% CI 1.45–1.90 mm), and 3.79° (95% CI 1.87°–5.70°), respectively, with dynamic computer-aided implant placement. Both methods improved accuracy compared to free-handed implant placement, but neither achieved ideal accuracy.46 Gwangho et al.47 note that the key steps of the operation are still completed manually by the surgeon, regardless of static guidance or dynamic navigation, and that human factors (such as hand tremor, fatigue, and unskilled operating technique) also affect the accuracy of implant placement.

Robotic-assisted implant surgery can provide accurate implant placement and help the surgeon control handpieces to avoid dangerous tool excursions during surgery.48 Furthermore, compared to manual calibration, registration, and surgical execution, automatic calibration, registration, and drilling with a dental implant robotic system reduces human error. This, in turn, helps avoid deviations caused by surgeon-related factors, thereby enhancing surgical accuracy, safety, success rate, and efficiency while also reducing patient trauma.7 With the continuous improvement of technology and reduction of costs, implant robots are gradually becoming commercially available. Yomi (Neocis Inc., USA) has been approved by the Food and Drug Administration, while Yakebot (Yakebot Technology Co., Ltd., Beijing, China), Remebot (Baihui Weikang Technology Co., Ltd., Beijing, China), Cobot (Langyue dental surgery robot, Shecheng Co. Ltd., Shanghai, China), Theta (Hangzhou Jianjia Robot Co., Ltd., Hangzhou, China), and Dcarer (Dcarer Medical Technology Co., Ltd., Suzhou, China) have been approved by the NMPA. Dencore (Lancet Robotics Co., Ltd., Hangzhou, China) is in the clinical trial stage in China.

Compared to other surgeries performed under general anesthesia, dental implant surgery can be completed under local anesthesia, with patients awake but unable to remain completely still throughout the entire procedure. Therefore, research on dental implant robotic systems, as one of the cutting-edge technologies, mainly focuses on acquiring intraoperative feedback information (including tactile and visual information), different surgical methods (automatic and manual drilling), patient position following, and the simulation of the surgeon's tactile sensation.

The architecture of dental implant robotics primarily comprises the hardware utilized for surgical data acquisition and surgical execution (Fig. 4). Data acquisition involves perceiving, identifying, and understanding the surroundings and the information required for task execution through encoders, tactile sensors, force sensors, and vision systems. Real-time information also includes the robot's surrounding environment, object positions, shapes, sizes, surface features, and other relevant data. The perception system helps the robot comprehend its working environment and facilitates corresponding decision-making and actions.

The architecture of dental implant robotics

During the initial stage of research on implant robotics, owing to the lack of sensory systems, fiducial markers and corresponding algorithms were used to calculate the transformation between the robot's and the model's coordinate systems. The robot could then determine the actual position through coordinate conversions. Dutreuil et al.49 proposed a new method for creating static guides on casts using robots, based on the determined implant position. Subsequently, Boesecke et al.50 developed a surgical planning method using linear interpolation between start, end, and intermediate points. The surgeon performed the osteotomies by holding the handpieces, with robot guidance based on the preoperatively determined implant position. Sun et al.51 and McKenzie et al.52 registered cone-beam computed tomography (CBCT) images, the robot's coordinate system, and the patient's position using a coordinate measuring machine, which facilitated the transformation of preoperative implant planning into intraoperative actions.
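The fiducial-based coordinate conversion these early systems relied on can be illustrated with a standard least-squares rigid registration (the Kabsch algorithm). This is a generic sketch, not the algorithm of any cited system; the marker coordinates, the planned entry point, and the `register_rigid` helper are all hypothetical.

```python
import numpy as np

def register_rigid(model_pts, robot_pts):
    """Estimate rotation R and translation t mapping model-frame fiducial
    points onto robot-frame measurements (least-squares, Kabsch algorithm)."""
    cm = model_pts.mean(axis=0)                  # centroid in model frame
    cr = robot_pts.mean(axis=0)                  # centroid in robot frame
    H = (model_pts - cm).T @ (robot_pts - cr)    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cr - R @ cm
    return R, t

# Hypothetical fiducials: four non-coplanar markers on a jaw splint (mm).
model = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -2.0, 1.0])
robot = model @ R_true.T + t_true                # simulated robot-frame measurements

R, t = register_rigid(model, robot)
planned = np.array([3.0, 4.0, 2.0])              # planned implant entry (model frame)
print(R @ planned + t)                           # same point in robot coordinates
```

With noisy measurements the same code returns the best-fit transform in the least-squares sense, which is why at least three (preferably more) well-spread markers are used in practice.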

Neocis has developed a dental implant robot system called Yomi (Neocis Inc.)53 based on haptic perception; it connects a mechanical joint measurement arm to the patient's teeth to track their position. The joint encoder provides information on the drill position, while the haptic feedback of the surgeon-maneuvered handpiece constrains the direction and depth of implant placement.

Optical positioning is a commonly used localization method that offers high precision, a wide field of view, and resistance to interference.54 This makes it capable of providing accurate surgical guidance for robotics. Yu et al.55 combined image-guided technology with robotic systems. They used a binocular camera to capture two images of the same target, extracted pixel positions, and employed triangulation to obtain three-dimensional coordinates. This enabled perception of the relative positional relationship between the end-effector and the surrounding environment. Yeotikar et al.56 suggested mounting a camera on the end-effector of the robotic arm, positioned as close to the drill as possible. By aligning the camera's center with the drill's line of sight at a specific height above the lower jaw surface, the camera's center accurately aligns with the drill's position in a two-dimensional plane at a fixed height from the lower jaw. This alignment guides the robotic arm in drilling through specific anatomical landmarks in the oral cavity. Yan et al.57 proposed that eye-in-hand optical navigation systems may introduce errors when the handpiece at the end of the robotic arm is changed during surgery. Additionally, owing to the narrow oral environment, customized markers may fall outside the camera's field of view when the robotic arm moves to certain positions.42 To tackle this problem, a dental implant robot system based on optical-marker spatial registration and probe positioning strategies was designed. Zhao et al. constructed a modular implant robotic system based on binocular visual navigation devices operating on visible light in eye-to-hand mode, allowing complete observation of markers and handpieces within the camera's field of view, thereby ensuring greater flexibility and stability.38,58
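The triangulation step Yu et al. describe can be sketched for the simplest case: a rectified stereo pair (identical pinhole cameras with parallel optical axes). The focal length, baseline, and pixel coordinates below are invented for illustration, not taken from any cited system.

```python
import numpy as np

def triangulate(uv_left, uv_right, f, baseline, cx, cy):
    """Recover a 3-D point from matched pixel coordinates in a rectified
    stereo pair (identical pinhole cameras, optical axes parallel)."""
    uL, vL = uv_left
    uR, _ = uv_right
    disparity = uL - uR                  # horizontal pixel shift between views
    assert disparity > 0, "point must lie in front of both cameras"
    Z = f * baseline / disparity         # depth from similar triangles
    X = (uL - cx) * Z / f                # back-project through the left camera
    Y = (vL - cy) * Z / f
    return np.array([X, Y, Z])

# Hypothetical calibration: f = 800 px, baseline = 60 mm, principal point (320, 240).
p = triangulate((420.0, 300.0), (380.0, 300.0),
                f=800.0, baseline=60.0, cx=320.0, cy=240.0)
print(p)  # → [ 150.   90. 1200.] (millimetres in the left-camera frame)
```

Real systems first calibrate and rectify the cameras; depth resolution degrades with the square of distance, which is one reason the camera is kept close to the surgical field.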

The dental implant robotics execution system comprises hardware such as motors, force sensors, actuators, and controllers, plus software components, to perform tasks and actions during implant surgery. The system receives commands, controls the robot's movements and behaviors, and executes the necessary tasks and actions. Presently, research on dental implant robotic systems primarily focuses on the mechanical arm structure and drilling methods.

The majority of dental implant robotic systems directly adopt serially linked industrial robotic arms, building on the successful industrial application of the same arm configuration.59,60,61,62 These studies not only establish implant robot platforms to validate implant accuracy and assess the influence of implant angles, depths, and diameters on initial stability, but also simulate chewing processes and prepare natural root-shaped osteotomies based on volume decomposition. Presently, most dental implant robots in research employ a single robotic arm for surgery. Lai et al.62 indicated that the stability of the handpieces during surgery and real-time feedback of patient movement are crucial factors affecting the accuracy of robot-assisted implant surgery. The former requires physical feedback, while the latter necessitates visual feedback. Hence, they employed a dual-arm robotic system in which the main robotic arm was equipped with multi-axis force and torque sensors for performing osteotomies and implant placement. The auxiliary arm carried an infrared monocular probe used for visual-system positioning, addressing the visual occlusion issues that arise from changes in arm angles during surgery.

The robots mentioned above use handpieces to execute osteotomies and implant placement. However, owing to limitations in patient mouth opening, performing osteotomies and placing implants in the posterior region can be challenging. To overcome these spatial constraints, Yuan et al.63 proposed a robot system based on their earlier research on laser-assisted tooth preparation. This system uses a non-contact ultra-short-pulse laser to prepare osteotomies. The preliminary findings confirmed the feasibility of robotically controlled ultra-short-pulse lasers for osteotomies, introducing a novel method for a non-contact dental implant robotic system.

It can be challenging for patients under local anesthesia to remain completely still during robot-assisted dental implant surgery.52,64,65,66,67 Any significant micromovement in the patient's position can severely affect clinical outcomes, such as surgical efficiency, implant placement accuracy relative to the planned position, and patient safety. Intraoperative movement may necessitate re-registration for certain dental implant robotic systems. To guarantee safety and accuracy during surgery, the robot must detect any movement in the patient's position and promptly adjust the robotic arm in real time. Yakebot uses binocular vision to monitor visual markers placed outside the patient's mouth and at the end of the robotic arm. This captures motion information and calculates relative position errors. The robot control system utilizes preoperatively planned positions, visual and force feedback, and robot kinematic models to calculate optimal control commands for guiding the robotic arm's micromovements and tracking the patient's micromovements during drilling. As the osteotomies are performed to the planned depth, the robotic arm compensates for the patient's displacement through its position-following function. Yakebot's visual system continuously monitors the patient's head movement in real time and issues control commands every 0.008 s. The robotic arm can follow the patient's movements with a motion servo in just 0.2 s, ensuring precise and timely positioning.
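The position-following behavior described above can be caricatured as a proportional servo loop running on an 8 ms cycle. Everything below (the gain, the toy drift, the `follow` helper) is a hypothetical sketch of the idea, not the vendor's controller.

```python
import numpy as np

CYCLE = 0.008  # s, the control-command interval reported in the text

def follow(planned_tip, patient_offset, arm_tip, gain=0.5):
    """One servo cycle: shift the planned drill-tip target by the patient's
    measured displacement and step the arm toward it (proportional control)."""
    target = planned_tip + patient_offset        # target moves with the patient
    return arm_tip + gain * (target - arm_tip)

# Toy simulation: the patient drifts 2 mm along x; the arm converges on the
# displaced target within the 0.2 s settling time quoted in the text.
planned = np.array([0.0, 0.0, 30.0])             # planned tip position (mm)
patient = np.array([2.0, 0.0, 0.0])              # measured patient displacement (mm)
tip = planned.copy()
for _ in range(int(0.2 / CYCLE)):                # 25 cycles = 0.2 s
    tip = follow(planned, patient, tip)

err = np.linalg.norm(tip - (planned + patient))
print(f"residual after 0.2 s: {err:.1e} mm")
```

A production controller would add velocity limits, force feedback, and safety stops; the point here is only that a short cycle time plus a modest gain yields sub-micrometre tracking error well inside the quoted 0.2 s.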

Robot-assisted dental implant surgery requires the expertise and tactile sense of a surgeon to ensure accurate implantation. Experienced surgeons can perceive bone density through the resistance felt in their hands and adjust the force magnitude or direction accordingly, ensuring proper drilling along the planned path. Robotic systems lacking such perception and control may drift toward the bone side with lower density, leading to inaccurate positioning relative to the planned implant position.61,62 Addressing this challenge, Li et al.68 established force-deformation compensation curves in the X, Y, and Z directions for the robot's end-effector, based on the visual and force servo systems of the autonomous dental robotic system Yakebot. A corresponding force-deformation compensation strategy was then formulated for this robot, and the effectiveness and accuracy of force and visual servo control were demonstrated through in vitro experiments. This mixed control mode, integrating visual and force servo systems, has improved the robot's accuracy in implantation and its ability to handle complex bone structures. Also based on force and visual servo control, Chen et al.69 explored the relationship between force sensing and the primary stability of implants placed with the Yakebot system in an in vitro study. A significant correlation was found between Yakebot's force sensing and the insertion torque of the implants. This correlation conforms to an interpretable mathematical model, which facilitates predictable initial stability of the implants after placement.
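A force-deformation compensation curve of the kind Li et al. describe can be sketched as a simple least-squares fit along a single axis; the calibration numbers and the `compensated_target` helper are invented for illustration (the published strategy covers X, Y, and Z using the robot's own calibration data).

```python
import numpy as np

# Hypothetical calibration data: measured end-effector deflection (mm) at
# several applied drilling forces (N) along one axis, as might be gathered
# when building a force-deformation compensation curve.
forces = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
deflections = np.array([0.00, 0.04, 0.09, 0.13, 0.18])

# Least-squares fit of deflection as a linear function of force.
slope, intercept = np.polyfit(forces, deflections, 1)

def compensated_target(planned, axis_force):
    """Offset the planned coordinate opposite to the predicted deflection,
    so the deflected tool tip lands on the planned position."""
    predicted_sag = slope * axis_force + intercept
    return planned - predicted_sag

# At a measured 12 N axial force, the commanded target is pre-offset:
target = compensated_target(planned=12.50, axis_force=12.0)
print(round(target, 3))  # → 12.394
```

The same idea generalizes to a per-axis polynomial or lookup table evaluated each control cycle from the live force-sensor reading.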

Heat production during osteotomies is considered one of the leading causes of bone tissue injury, and experienced surgeons can sense possible thermal exposure through the feel of the handpiece. With free-handed implant placement, however, it is challenging to perceive temperature changes during surgery and to establish an effective temperature prediction model that relies solely on a surgeon's tactile sense. Zhao et al.,70 using the Yakebot robotic system, investigated the correlation between drilling-related mechanical data and heat production and established a clinically relevant surrogate for intraosseous temperature measurement using signals captured by force/torque sensors. They also established a real-time temperature prediction model based on real-time force-sensor readings. This model aims to prevent the adverse effects of high temperatures on osseointegration, laying the foundation for dental implant robotic systems to autonomously control heat production and prevent bone damage during autonomous robotic implant surgery.
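A real-time temperature surrogate of the kind Zhao et al. describe can be sketched as a regression from force readings to measured temperature. The calibration data, the linear model, and the use of a 47 °C cut-off (a commonly cited thermal-necrosis threshold) are illustrative assumptions, not the published model.

```python
import numpy as np

# Hypothetical calibration data pairing thrust-force readings during drilling
# (N) with intraosseous temperatures (°C) measured in vitro.
thrust = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
temp = np.array([31.0, 33.5, 36.2, 38.8, 41.6, 44.1])

a, b = np.polyfit(thrust, temp, 1)   # temperature ≈ a * thrust + b

def predict_temp(force_reading):
    """Real-time surrogate temperature estimate from the force signal."""
    return a * force_reading + b

BONE_DAMAGE_THRESHOLD = 47.0  # °C; commonly cited thermal-necrosis limit
estimate = predict_temp(22.0)
print(estimate < BONE_DAMAGE_THRESHOLD)  # True → the robot may keep drilling
```

In an autonomous system such a check would run every control cycle, throttling feed rate or triggering irrigation before the predicted temperature approaches the threshold.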

The innovative technologies mentioned above allow dental implant robotic systems to simulate the tactile sensation of a surgeon and even surpass the limitations of human experience. This advancement promises to address issues that free-handed implant placement techniques struggle to resolve. Moreover, this development indicates substantial progress and great potential for implantation.

Robot-assisted dental implant surgery consists of three phases: preoperative planning, the intraoperative phase, and the postoperative phase (Fig. 5). For preoperative planning, digital intraoral casts and CBCT data are obtained from the patient and imported into planning software for 3D reconstruction and implant placement planning. For single or multiple tooth gaps treated with implant robotic systems (except Yakebot),61,62,71,72 a universal registration device (such as a U-shaped tube) must be worn at the patient's missing tooth site, attached with silicone impression material, before CBCT data are acquired for registration. The software performs virtual placement of implant positions based on prosthetic and biological principles of implant surgery, taking into account the bone quality of the edentulous site to determine the drilling sequence and the insertion depth, speed, and feed rate for each drill. For single or multiple tooth implants performed with Yakebot, preoperative CBCT imaging with markers is not needed. However, surgical accessories must be designed in software (Yakebot Technology Co., Ltd., Beijing, China): parts with registration holes, brackets for attaching visual markers, and devices for assisting mouth opening and suction. These accessories are manufactured using 3D printing.

Clinical workflow of robotic-assisted dental implant placement

The intraoperative phase begins with registration and calibration. For Yakebot, the end-effector marker is mounted on the robotic arm, and its spatial positions are recorded under the optical tracker. The calibration plate with positioning points is then assembled onto the implant handpiece for drill-tip calibration. Next, the registration probe is inserted into the registration holes of the jaw positioning plate in turn, for spatial registration of the jaw marker and the jaw. Robot-assisted dental implant surgery usually does not require flap surgery,73,74 yet bone grafting due to insufficient bone volume in a single edentulous space, or cases of complete edentulism requiring alveolar ridge preparation, may require flap elevation. For full-arch robot-assisted implant surgery, a personalized template with a positioning marker is required and should be fixed with metallic pins for an intraoperative CBCT examination, facilitating registration of the robot and the jaw in visual space and allowing the surgical robot to track the patient's motion. The safe deployment of the robot away from the surgical site is an essential principle of robot-assisted implant surgery. With most robots, such as Yomi, the surgeon holds the handpiece to control and supervise the robot's movement in real time and can stop the robotic arm in case of any accident. With Yakebot, the entire surgery is performed under the surgeon's supervision, and immediate instructions can be sent in response to possible emergencies via a foot pedal. Additionally, recording the instruments' entrance into and exit from the patient's mouth ensures that they do not damage the surrounding tissues. The postoperative phase covers postoperative CBCT acquisition and accuracy measurement.

In clinical surgical practice, robots with varying levels of autonomy perform implant surgeries differently. According to the autonomy levels classified by Yang et al.6,8,33 for medical robots, commercial dental implant robotic systems (Table 2) currently operate at the level of robot assistance or task autonomy.

Robot-assistance-level dental implant robotic systems provide haptic,75 visual, or combined visual and tactile guidance during dental implant surgery.46,76,77 Throughout the procedure, surgeons must maneuver handpieces attached to the robotic guidance arm and apply light force to prepare osteotomies.62 The robotic arm constrains the drill to the 3D space defined by the virtual plan, enabling surgeons to move the end of the mechanical arm horizontally or adjust its movement speed. However, during immediate implant placement or full-arch implant surgery, both surgeons and robots may struggle to accurately perceive poor bone quality, which should prompt adjustments at the time of implant placement. This can lead to final implant positions that deviate from the planned locations.

Task-autonomous dental implant robotic systems can autonomously perform parts of the surgical procedure, such as moving the handpiece to the planned position and preparing the implant bed at a predetermined speed according to the preoperative plan, while surgeons send instructions, monitor the robot's operation, and intervene as needed. For example, the Remebot77,78 requires surgeons to drag the robotic arm into and out of the mouth during surgery, and the robot automatically performs osteotomies or places implants at the planned positions under the surgeon's surveillance. The autonomous dental implant robot system Yakebot73,79,80 can accurately reach the implant site and complete operations such as implant bed preparation and placement during surgery. It is controlled by the surgeon via foot pedals and automatically stops drilling after reaching the termination position before returning to its initial position. Throughout the entire process, surgeons only need to send commands to the robot using the foot pedals.

Figure 6 shows the results of accuracy in vitro, in vivo, and clinical studies on robot-assisted implant surgery.20,46,48,55,62,64,67,68,69,70,71,72,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89 The results suggest that platform and apex deviation values are consistent across different studies. However, there are significant variations in angular deviations among different studies, which may be attributed to differences in the perception and responsiveness to bone quality variances among different robotic systems. Therefore, future development should focus on enhancing the autonomy of implant robots and improving their ability to recognize and respond to complex bone structures.

Accuracy reported in studies on robotic-assisted implant placement

Xu et al.77 conducted a phantom study comparing implant placement accuracy across three levels of dental implant robotics: a passive robot (Dcarer, level 1), a semi-active robot (Remebot, level 2), and an active robot (Yakebot, level 2) (Fig. 7). The study found that the active robot had the lowest platform and apex deviations between planned and actual implant positions, while the semi-active robot had the lowest angular deviations. Chen et al.46 and Jia et al.79 conducted clinical trials of robotic implant surgery in partially edentulous patients using a semi-active dental implant robotic system (level 1) and an autonomous dental implant robot (level 2). The deviations of the implant platform, apex, and angle were (0.53 ± 0.23) mm/(0.43 ± 0.18) mm, (0.53 ± 0.24) mm/(0.56 ± 0.18) mm, and 2.81° ± 1.13°/1.48° ± 0.59°, respectively. These results consistently confirmed that robotic systems can achieve higher implant accuracy than static guidance and that there is no significant correlation between accuracy and implant site (such as anterior or posterior sites). The platform and angle deviations of the autonomous robot were smaller than those of the semi-active system. Li et al.73 reported the use of the autonomous dental implant robot (level 2) to place two adjacent implants with immediate postoperative restoration. The interim prosthesis, fabricated prior to implant placement, was seated without any adjustment, and no adverse reactions occurred during the operation.

Comparison of accuracy of dental implant robotics with different levels of autonomy (phantom experiments) (*P<0.05, **P<0.01, ***P<0.001)

Bolding et al.,53 Li et al.,20 Jia et al.,79 and Xie et al.90 used dental implant robots in clinical trials of full-arch implant surgery, with five or six implants placed in each jaw. The deviations of implant platform, apex, and angle are shown in Fig. 8. The haptic dental implant robot (level 1) used by Bolding et al.53 showed larger deviations than the other studies, which used semi-active (level 1) or active (level 2) robots. As its handpiece must be maneuvered by the surgeon, human errors such as surgeon fatigue may not be avoided. Owing to the parallel implant placement paths across the various implant abutments, prefabricated temporary dentures could be seated smoothly, and some patients wore temporary complete dentures immediately after surgery. These results indicate that robotic systems can accurately locate and perform implant placement during surgery.

Comparison of accuracy in robotic-assisted full-arch implant placement

As there are relatively few clinical studies of implant robots, Takács et al.91 conducted a meta-analysis of in vitro comparisons of free-hand, static-guided, dynamically navigated, and robot-assisted implant placement, as shown in Fig. 9. They found that robot-assisted implant placement offers accuracy advantages over free-hand, static-guided, and dynamically navigated placement. However, in vitro studies cannot fully simulate the patient's oral condition and bone quality. Recent clinical studies89,92,93 have shown lower deviations for robot-assisted implant placement than for static-guided and dynamically navigated placement. Common causes of deviation in static-guided and dynamically navigated placement include deflection caused by hand tremors when drilling dense bone, the surgeon's experience, and other human factors. Larger clinical studies will be needed to evaluate the differences between robotic and conventional surgical approaches and to guide the further development and refinement of robotic techniques.

Comparison of accuracy of free-handed, static, dynamic, and robotic-assisted implant placement. (FHIP free-hand implant placement, SCAIP static computer-aided implant placement, DCAIP dynamic computer-aided implant placement, RAIP robot-assisted implant placement)

Regarding long-term performance, none of the comparative studies of robotic systems in dental implant procedures followed patients for more than a year. One 1-year prospective clinical study by Xie et al.90 showed that the peri-implant tissues remained stable at the 1-year visit after robot-assisted full-arch surgery. There is little evidence on clinical outcomes, especially patient-reported outcomes, so further research should include more detailed clinical assessment.

Although robotic-assisted dental implant surgery can improve accuracy and treatment quality,94 it involves complex registration, calibration, and verification procedures that prolong surgery. These tedious processes may introduce new errors61 and lower work efficiency, especially in single-tooth implant placement,62 which could extend visit times and affect patient satisfaction. In addition, surgeons must undergo additional training to familiarize themselves with the robotic system.87

During implantation, the drill tips at the end of the robotic arm cannot be tilted, which makes robots harder to use in posterior sections with limited occlusal space.61,62 In addition, currently available marker systems require patients to wear additional devices to hold the markers in place. If the markers are contaminated or obstructed by blood, the vision system may fail to detect them, limiting surgical maneuverability to some extent. During immediate implant placement, or when bone quality at the implant site is poor, the drill tips may deviate toward the tooth sockets or areas of lower bone density, seriously affecting surgical precision.

Currently, only one study has developed a force-deformation compensation strategy for these robots,68 and clinical validation is still lacking. Additionally, dental implant robotic systems, like the dental robots developed for prosthetics, endodontics, and orthodontics, are currently single-function; multi-functional robots will be required to perform a range of dental treatments.

Despite the enormous potential of robotic systems in the medical field, and much as with the introduction of computer-aided design/computer-aided manufacturing technology, applying this technology faces multiple challenges in its initial stages. The high cost of robotic equipment may limit its adoption in certain regions or medical institutions, and surgeons require specialized technical training before operating robotic systems, which adds training costs and time investment.95


The evolution of robotics: research and application progress of dental implant robotic systems | International Journal of ... - Nature.com

Nvidia Announces Robotics-Oriented AI Foundational Model – InfoQ.com

At its recent GTC 2024 event, Nvidia announced a new foundational model to build intelligent humanoid robots. Dubbed GR00T, short for Generalist Robot 00 Technology, the model will understand natural language and be able to observe human actions and emulate human movements.

According to Nvidia CEO Jensen Huang, creating intelligent humanoid robots is the most exciting AI problem today. GR00T robots will learn coordination and other skills by observing humans, so they can navigate, adapt to, and interact with the real world. At the conference keynote, Huang showed several demos of what GR00T is currently capable of, including robots performing a variety of tasks.

The GR00T model takes multimodal instructions and past interactions as input and produces the actions for the robot to execute.

To power GR00T, Nvidia has created a new family of systems-on-modules, called Jetson Thor, using the latest Blackwell graphics architecture from the company and able to provide 800 teraflops (TFLOPS) of eight-bit floating-point compute.

At the foundation of GR00T lies Nvidia Isaac Sim, an extensible, Omniverse-based platform for robotics simulation aimed to improve the way AI-based robots are designed and tested, according to the company.

To train GR00T at scale, Nvidia has also built a new compute orchestration platform, Nvidia Osmo, aimed at coordinating training and inference across several Nvidia systems, including DGX systems for training, OVX systems for simulation, and IGX and AGX systems for hardware-in-the-loop validation.

Embodied AI models require massive amounts of real and synthetic data. The new Isaac Lab is a GPU-accelerated, lightweight, performance-optimized application built on Isaac Sim specifically for running thousands of parallel simulations for robot learning.

While GR00T is still very much a work in progress, Nvidia has announced two of the building blocks that will compose it, as part of the Isaac platform: a foundational model for robotic-arm manipulators, called Isaac Manipulator, and a collection of hardware-accelerated packages for visual AI and perception, the Isaac Perceptor.

According to Nvidia, Isaac Manipulator

provides up to an 80x speedup in path planning and zero-shot perception increases efficiency and throughput, enabling developers to automate a greater number of new robotic tasks.

On the other hand, Isaac Perceptor aims to improve efficiency and safety in environments where autonomous mobile robots are used, such as in manufacturing and fulfillment operations.

Both the Manipulator and the Perceptor should become available in the next quarter, says Huang.

On a related note, Nvidia has joined the Open Source Robotics Alliance, which aims to provide financial and industry support to the Robot Operating System (ROS). However, the company has not said whether it plans to use ROS for GR00T robots.


Google giving $500K to expand robotics and AI education programs in Washington state – GeekWire

U.S. Congresswoman Suzan DelBene joins Google's Paco Galanes, Kirkland site lead and engineering director, right, with students working on robotics projects at Finn Hill Middle School in Kirkland, Wash., on Friday. (Google Photo)

Google's philanthropic arm is giving a $500,000 grant to expand access to robotics and artificial intelligence education programs across Washington state middle schools, the company announced Friday.

In partnership with the non-profits Robotics Education & Competition Foundation (RECF) and For Inspiration and Recognition of Science and Technology (FIRST), Google.org said the grant would support 1,234 new or existing robotics clubs in Washington and reach more than 8,900 students over the course of three years.

The announcement came during an event Friday morning at Finn Hill Middle School in Kirkland, Wash., where students put together robots and were introduced to hands-on STEM tools by Google employee volunteers. The Alphabet-owned tech giant has a sizable workforce in Kirkland and the greater Seattle area.

U.S. Congresswoman Suzan DelBene (D-WA) attended the event and said the investment was key to educating future leaders in robotics and AI.

"Programs like these give young people the opportunity to innovate, build new skills, and open bright new pathways for their future," DelBene said.

The funding is part of a $10 million initiative launched by Google.org to fund FIRST and RECF in communities where the company has a presence.


Rainbow Robotics unveils RB-Y1 wheeled, two armed robot – Robot Report


RB-Y1 mounts a humanoid-type double-arm robot on a wheeled, high-speed mobile base. | Credit: Rainbow Robotics

Rainbow Robotics announced the release of detailed specifications for the new RB-Y1 mobile robot. The company recently signed a memorandum of understanding with Schaeffler Group and the Korea Electronics Technology Institute, or KETI, to co-develop the RB-Y1 and other mobile manipulators in Korea.

The past year has seen an explosion in the growth of humanoids, most of which walk on two legs. There have also been many recent releases of mobile manipulators: autonomous mobile robots (AMRs) with a single-arm manipulator mounted on the vehicle.

The RB-Y1 is a wheeled robot base with a humanoid double-arm robot on top. Rainbow Robotics' robot uses that base to maneuver through its environment and position the arms for manipulation tasks. The company calls this configuration a "bimanual manipulator."

To perform varied and complex tasks, each of the RB-Y1's arms has seven degrees of freedom, and both are mounted on a single six-axis torso that can move the body. With this kinematic configuration, the body can travel more than 50 cm (19.7 in.) vertically, making it possible to perform tasks at various heights.


The RB-Y1's maximum driving speed is 2,500 mm/s (5.6 mph), and the company claims the robot can accelerate quickly and turn at higher speeds by leaning its body into the turn. To avoid toppling while in motion, the center of gravity can be controlled by dynamically changing the height of the body.
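The leaning behavior described here follows from basic vehicle dynamics: to keep the resultant of gravity and the centripetal reaction pointing through the support base, the body should lean toward the turn center by roughly θ = arctan(v²/(rg)). A rough illustration at the robot's stated top speed, where the 2 m turn radius is an assumed value, not a published spec:

```python
import math

def lean_angle_deg(speed_m_s, turn_radius_m, g=9.81):
    """Lean angle that keeps the combined gravitational and centripetal
    reaction aligned through the support base: theta = arctan(v^2 / (r * g))."""
    return math.degrees(math.atan((speed_m_s ** 2) / (turn_radius_m * g)))

# RB-Y1's stated top speed is 2,500 mm/s = 2.5 m/s; the 2 m radius is assumed.
angle = lean_angle_deg(2.5, 2.0)
```

At 2.5 m/s on a 2 m radius this gives a lean of roughly 18°, which is why dynamically lowering the body (and hence the center of gravity) helps at speed.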

The dimensions of the robots are 600 x 690 x 1,400 mm (23.6 x 27.2 x 55.1 in.), and the unit weighs 131 kg (288.8 lb.). The manipulators can each lift 3 kg (6.61 lb.).

At press time, there are not many details about the robot's ability to function using artificial intelligence, and one early video showed it working via teleoperation. It's likely that the demonstrations in the video below are performed by remote operators.

However, Rainbow Robotics clearly has the goal of making its robot fully autonomous in the future, as more research, development, training, and simulation are completed.

"These days, when generative AI such as ChatGPT and Figure is a hot topic in the robot industry, we have developed a bimanual mobile manipulator in line with the AI era," stated a company spokesperson. "We hope that the platform will overcome the limitations of existing industrial robots and be used in many industrial sites."


Comau and Leonardo Want to Elevate Aeronautical Structure Inspection with Cognitive Robotics – DirectIndustry e-Magazine

Robotics company Comau and aerospace company Leonardo are currently testing a self-adaptive robotic solution to enable autonomous inspection of helicopter blades. This could enhance quality inspections and offer greater flexibility without sacrificing precision or repeatability. At a time when the aerospace industry demands faster processes, better control, and higher quality, it requires a new generation of advanced automation. We contacted Simone Panicucci, Head of Cognitive Robotics at Comau, to learn more about this solution and how it could benefit the aerospace industry.

The increasing demand for faster processes in the aerospace industry requires automating complex processes that, until recently, could only be performed manually. When it comes to testing essential structures such as helicopter blades, the potential benefits of automation increase exponentially. Robotic inspection ensures precision and efficiency, as well as standardization and full compliance with the testing process, by objectively executing each assigned task.

To meet the industry's needs, Comau and Leonardo have been testing an intelligent inspection solution based on Comau's cognitive robotics on-site in Anagni, Italy, to inspect helicopter blades measuring up to 7 meters.

The solution relies on a combination of self-adaptive robotics, advanced vision systems, and artificial intelligence. Comau's intelligent robot can autonomously perform hammer tests and multispectral surface inspections on the entire nonlinear blade to measure and verify structural integrity, with a granularity exceeding thousands of points.

The robot perceives and comprehends its environment, makes calculated decisions, and intuitively optimizes the entire inspection process.

They will then test the system at another site to enhance MRO (maintenance, repair, and overhaul) service capabilities.

Simone Panicucci, Head of Cognitive Robotics at Comau, gave us more details about this collaboration.

Simone Panicucci: The collaboration grew out of Leonardo's need to ensure advanced autonomous inspection of highly critical aviation infrastructure using cognitive robotics. The two companies are collaborating to develop and test a powerful, self-adaptive robotic solution that autonomously inspects helicopter blades up to 7 meters in length. Aerospace is not yet a sector used to automation: high variability and low volumes act as constraints on deep automation adoption. Cognitive robotics is thus a key enabler, providing the benefits of automation (such as process engineering, repeatability, and traceability) even with heterogeneous products and unstructured environments, and Comau is leading the creation of AI-based, custom robotic solutions.

Simone Panicucci: The solution developed is a self-adaptive, efficient machine for inspecting very large helicopter blades. It includes a visual inspection as well as a tapping test, which consists of physically striking the blade surface with a small, purpose-built hammer and recognizing from the resulting sound whether there is any issue in the blade's internal structure. Together, the two inspections require testing tens of thousands of points across the blade.

The robot can sense the environment and locate the blade in space with an accuracy below 10 mm. It can also detect objects in the scene that it might collide with, and it calculates at runtime an optimal, collision-free path to complete the task.
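The runtime, collision-free path planning described here can be illustrated with a toy occupancy-grid search. Comau's actual planner works in 3D on sensed scenes and is certainly more sophisticated; this minimal breadth-first sketch only conveys the idea of routing around obstacles detected at runtime:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid: returns the shortest
    collision-free path from start to goal as a list of cells, or None if
    the goal is unreachable. grid[r][c] == 1 marks a sensed obstacle."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, node = [], goal
            while node is not None:          # walk parents back to start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

# A sensed obstacle (1s) blocks the direct route; the planner routes around it.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 2))
```

Because the grid is built from the sensed scene, replanning after each new acquisition is just a matter of rebuilding the obstacle map and searching again.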

Simone Panicucci: The solution is equipped with a 3D camera whose input is processed by a vision system that merges multiple acquisitions, post-processes the acquired scene, and then localizes both the helicopter blade and potential obstacles.

Simone Panicucci: All the robot's movements are calculated after the scene has been sensed; no robot movement is calculated offline. Additional sensors have been added to the robot flange as an external, independent system to avoid damaging the blade.

Simone Panicucci: Today, helicopter blade inspection is done manually. The provided solution offers greater accuracy and efficiency, ensuring standardization and full compliance with the testing process by objectively completing each assigned task. Operators now program the machine, codifying their experience through a simplified user interface. The machine can work for hours without intervention, providing an accurate report summarizing critical points at the end.

Simone Panicucci: The flexibility comes from the fact that the solution can deal with different helicopter blade models and, potentially, different helicopter components. Accuracy and repeatability are typical benefits of automation, further improved here by the adoption of the vision system. Quality increases because the operator can now focus on the activity where he or she adds the most value, defect detection and confirmation, instead of mechanically performing the inspection.

Simone Panicucci: Operator knowledge is always at the center. Leonardo personnel keep the final word on certifying the helicopter blade's status, as well as on any point inspected. The automation aims to relieve operators of the repetitive task of manually inspecting tens of thousands of points on the helicopter surface. After hours of signal recording, the solution generates a comprehensive report summarizing the results of AI-based anomaly detection. The industrialized solution ensures repeatability, reliability, and traceability, covering and accurately performing the task.

Simone Panicucci: The solution is CE-certified and incorporates both physical and virtual safety measures. Physical barriers and safety lasers create a secure perimeter, halting operations instantly in the event of unexpected human intrusion. Furthermore, the solution ensures safe loading and unloading of helicopter blades and verifies proper positioning by requiring operators to activate safety keys from a distance of approximately 10 meters.

Simone Panicucci: This solution demonstrates that the product heterogeneity and low volumes typical of the aerospace sector no longer constrain automation adoption. Comau's cognitive robotics approach delivers effectiveness, quality, and repeatability even in unstructured environments and at low volumes, and it adapts easily to different helicopter models and blades. Executing a process like the tapping test required defining requirements and engineering the process, including the material of the tapping tool and the angle and force to apply. Additionally, all labeled data, whether automatic or manual, are now tracked and recorded, building an extensive knowledge base for training deep learning models.

Simone Panicucci: Leonardo has been testing this solution as a technology demonstration. The technology holds potential benefits for both Leonardo and its customers: it could standardize inspection processes globally and may be offered or deployed to customers with many helicopters requiring inspection.

Simone Panicucci: The solution could obviously be extended to other inspections in the helicopter sector, as well as to avionics. It is also worth noting that, from a technology point of view, the software pipeline, localization, and optimal path planning could readily be applied to other inspection activities, to manufacturing, or even to continuous processes like welding.

Simone Panicucci: The next step is thorough testing of the automation solution at another Leonardo Helicopters plant. This will contribute to ongoing improvements in the knowledge base and, consequently, in the deep learning algorithm for anomaly recognition.


Mercer University hosts GeorgiaFirst Robotics competition – 13WMAZ.com

GeorgiaFirst Robotics is a STEM-based organization for youth across the state.

MACON, Ga. On Saturday, high school students put their wits to the test during the GeorgiaFirst Robotics District Qualifier.

GeorgiaFirst Robotics is a STEM-based organization for youth across the state. They partnered with FIRST in South Carolina and Mercer University to host the competition at Hawkins Arena.

The Peachtree District Championship is the state championship for high school robotics, challenging teams of students to build robots that compete in head-to-head matches. The top 50 teams from the Peachtree District will receive an invitation to compete against the best in the district.

At the competition on Saturday, there were 50 teams present and around 200 students. Another thousand people attended as spectators.

Event organizers describe the competition as "combining the excitement of sport with the rigors of science and technology." They say it's "the ultimate sport for the mind," and high-school participants call it "the hardest fun you'll ever have."

"Students that are here are not only learning the technical skills in engineering, but they also have skills that they're learning through collaboration through teamwork through communication. Many of them are running social media, they learn how to run a business," Assistant Vice President for Enrollment Dr. Kelly Holloway said.

Programs like GeorgiaFirst Robotics aim to teach technical skills in engineering, teamwork and communication.

During the competition, three teams went head-to-head with another three teams.

The robots needed to complete tasks such as driving over big donut-like tubes and throwing them into goals. During the first 20 seconds of each match, the robots operate on their own, running code, and afterward they are remote-controlled by the students.
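The two-phase match structure described above, autonomous code for the opening 20 seconds and driver control afterward, amounts to a simple time-based mode switch. This toy sketch is purely illustrative and is not the actual FRC control framework:

```python
def control_mode(match_time_s, autonomous_window_s=20.0):
    """Return which control source drives the robot at a given match time:
    pre-programmed code during the opening window, driver input afterward."""
    return "autonomous" if match_time_s < autonomous_window_s else "teleoperated"

# Sampling the match timeline: the opening 20 seconds run on code alone,
# and the remainder of the match is driver-controlled.
modes = [control_mode(t) for t in (0, 10, 19.9, 20, 60)]
```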

Organizers say people might be surprised by what it takes to get these robots up and running and the career paths it opens up for students.



NEURA and Omron Robotics partner to offer cognitive factory automation – Robot Report


NEURA has developed cognitive robots in a variety of form factors. Source: NEURA Robotics

Talk about combining robotics and artificial intelligence is all the rage, but some convergence is already maturing. NEURA Robotics GmbH and Omron Robotics and Safety Technologies Inc. today announced a strategic partnership to introduce cognitive robotics into manufacturing.

"By pooling our sensor and AI technologies and expertise into an ultimate platform approach, we will significantly shape the future of the manufacturing industry and set new standards," stated David Reger, founder and CEO of NEURA Robotics.

Reger founded the company in 2019 with the intention of combining sensors and AI with robotics components for a platform for app development similar to that of smartphones. The NEURAverse offers flexibility and cost efficiency in automation, according to the company.

"Unlike traditional industrial robots, cognitive robots have the ability to learn from their environment, make decisions autonomously, and adapt to dynamic production scenarios," said Metzingen, Germany-based NEURA. "This opens new application possibilities, including intricate assembly tasks, detailed quality inspections, and adaptive material handling processes."

"We see NEURA's cognitive technologies as a compelling growth opportunity for industrial robotics," added Olivier Welker, president and CEO of Omron Robotics and Safety Technologies. "By combining NEURA's innovative solutions with Omron's global reach and automation portfolio, we will provide customers new ways to increase safety, productivity, and flexibility in their operations."

Pleasanton, Calif.-based Omron Robotics is a subsidiary of OMRON Corp. focusing on automation and safety sensing. It designs and manufactures industrial, collaborative, and mobile robots for various industries.

"We've known Omron for quite some time, and even before I started NEURA, we had talked about collaborating," Reger told The Robot Report. "They've tested our products, and we've worked together on how to benefit both sides."

"We have the cognitive platform, and they're one of the biggest providers of sensors, controllers, and safety systems," he added. "This collaboration will integrate our cognitive abilities and the NEURAverse with their sensors for a plug-and-play solution, which everyone is working toward."

Omron Robotics' Olivier Welker and NEURA's David Reger celebrate their partnership. Source: NEURA

When asked whether the NEURA and Omron Robotics partnership is mainly focused on market access, Reger replied, "It's not just the sales channel; there are no really big limits. From both sides, there will be add-ons."

Rather than see each other as competitors, NEURA and Omron Robotics are working to make robots easier to use, he explained.

"As a billion-dollar company, it could have told our startup what it wanted, but Omron is different," said Reger. "I felt we got a lot of respect from Olivier and everyone in that organization. It won't be a one-sided thing; it will be just, 'Let's help each other do something great.' That's what we're feeling every day since we've been working together. Now we can start talking about it."

NEURA has also been looking at mobile manipulation and humanoid robots, but adding capabilities to industrial automation is "the low-hanging fruit, where small changes can have a huge effect," said Reger. "A lot of things for humanoids have not yet been solved."

"I would love to just work on household robots, but the best way to get there is to use the synergy between industrial robotics and the household market," he noted. "Our MAiRA, for example, is a cognitive robot able to scan an environment and, from an idle state, pick any known or unknown objects."

MAiRA cognitive robot on MAV mobile base. Source: NEURA Robotics

NEURA and Omron Robotics promise to make robots easier to use, helping overall adoption, Reger said.

"A big warehouse company out of the U.S. is claiming that it's already using more than 1 million robots, but at the same time, I'm sure they'd love to use many more robots," he said. "It's also in the transformation from a niche market into a mass market. We see that's currently only possible if you somehow control the environment."

"It's not just putting all the sensors inside the robot, which we were first to do, and saying, 'OK, now we're able to interact with a human and also pick objects,'" said Reger. "Imagine there are external sensors, but how do you calibrate them? To make everything plug and play, you need new interfaces, which means collaboration with big players like Omron that provide a lot of sensors for the automation market."

NEURA has developed its own sensors and explored the balance of putting processing in the cloud versus the edge. To make its platform as popular with developers as that of Apple, however, the company needs the support of partners like Omron, he said.

Reger also mentioned NEURA's partnership with Kawasaki, announced last year, in which Kawasaki offers the LARA CL series cobot within its portfolio. Both collaborations "are incredibly important for NEURA and will soon make sense to everyone," he said.

Reger will be presenting a session on "Developing Cognitive Robotics Systems" at 2:45 p.m. EDT on Wednesday, May 1, Day 1 of the Robotics Summit & Expo. The event will be at the Boston Convention and Exhibition Center, and registration is now open.

"I'll be talking about making robots cognitive to enable AI to be useful to humanity instead of competing with us," he said. "AI is making great steps, but if you look at what it's doing, like drawing pictures or writing stories, these are things that I'd love to do but don't have the time for. But if I ask, let's say, AI to take out the garbage, or show it a picture of garbage, it can tell me how to do it, but it's simply not able to do something about it yet."

NEURA is watching humanoid development but is focusing on integrating cognitive robotics with sensing and wearables as it expands in the U.S., said Reger. The company is planning for facilities in Detroit, Boston, and elsewhere, and it is looking for leadership team members as well as application developers and engineers.

"We don't just want a sales office, but also production in the U.S.," he said. "We have 220 people in Germany; I just welcomed 15 new people who joined NEURA and are starting to build our U.S. team. In the past several months, we've gone with only European and American investors, and we're looking at the Japanese market. The U.S. is now open to innovation, and it's an exciting time for us to come."



Pioneering Emotional Intelligence in Robotics: The Rise of Emo – yTech

In a breakthrough for robotics and artificial intelligence (AI), a robot named Emo stands as a testament to technological ingenuity, possessing the capability to learn and replicate human emotional expressions. This development marks a significant stride in narrowing the emotional divide between humans and machines, potentially reshaping the way we interact with robots in a multitude of sectors.

Core Innovation Behind Emo's Emotional Acuity

Emo's core innovation lies in its dual neural network architecture, which endows the robot with unprecedented emotional intelligence. Using advanced cameras and motor systems, Emo observes and assimilates human expressions. Over time, its capacity to respond in contextually relevant ways improves, making human-robot interactions increasingly natural and seamless.

Professor Hod Lipson and his team are the visionaries behind Emo's conceptualization and realization. Their work paves the way for a future where robots can forge emotional bonds with humans, setting a new benchmark in social robotics.

Potential for Transformative Impact Across Industries

The ripple effect of Emo's introduction is vast, with implications for customer service, therapy, elder care, and education. It foretells significant growth in the social robotics market, with affordable manufacturing techniques on the horizon and analysts predicting robust market development bolstered by the integration of empathetic robots into everyday life.

Navigating the Ethical Considerations of Advanced Robotics

Notwithstanding the advances and promise of Emo's technology, ethical questions loom. Issues surrounding emotional authenticity, privacy, and employment disruption accentuate the need for conscientious deployment of such robots. This underscores the importance of engaging with ethics-focused organizations like IEEE and ACM, which strive to establish standards that balance technological progress with societal well-being.

In summary, Emo represents a fusion of AI and emotional perception, potentially revolutionizing human-robot interaction and industry practices. Its advent warrants thoughtful consideration of the ethical landscape as we embrace the age of emotionally intelligent machines. The robotic companion's evolution and the industry's path forward will be characterized by ethical vigilance, research brilliance, and insightful analysis, jointly shaping the role of robotics in our future.

Expanding the Market Forecast for Emotionally Intelligent Robots

The global market for social and emotional robotics is expected to grow substantially over the coming years. According to a report by MarketsandMarkets, the social robot market is expected to rise from USD 918 million today to over USD 3,900 million within the next decade, expanding at a CAGR of 14.5% during the forecast period. This growth is fueled by increasing adoption in sectors such as personal assistance, education, and healthcare, where robots can perform tasks ranging from companionship to assisting with cognitive therapy and rehabilitation.
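The cited figures imply a horizon that is easy to sanity-check: at 14.5% compound annual growth, USD 918 million reaches USD 3,900 million after ln(3900/918)/ln(1.145) ≈ 10.7 years, consistent with "the next decade." A quick check:

```python
import math

def years_to_reach(start, target, cagr):
    """Years needed for `start` to compound to `target` at annual rate `cagr`."""
    return math.log(target / start) / math.log(1.0 + cagr)

# MarketsandMarkets figures cited above: USD 918M growing to USD 3,900M at 14.5% CAGR.
years = years_to_reach(918.0, 3900.0, 0.145)
```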

The emergence of robots like Emo will spur further research and development, reducing costs and enhancing functionalities. This will likely attract investment and increase the accessibility of these robots, thus making them more commonplace in both consumer and commercial environments.

Challenges and Controversies Within the Robotics Industry Despite these promising market forecasts, the robotics industry faces challenges and controversies that could impact the emotional intelligence sector. One of the primary concerns is job displacement, as robots become capable of performing tasks typically reserved for human workers. This could lead to significant shifts in the labor market and necessitate retraining for those whose jobs are affected.

Another key consideration is data privacy and security, especially with robots that can collect and analyze personal emotional data. Ensuring that this information is used responsibly and securely is paramount to maintaining public trust.

For research, development, and the establishment of standards in robotics, resources can be found through organizations such as IEEE and ACM.

Summary and Industry Outlook In conclusion, Emo exemplifies the potential for emotion recognition in robotics to drive innovation across various sectors. The social and emotional robot industry is anticipated to flourish, bringing about advancements in how these machines are integrated into our daily lives. As the industry progresses, it will be essential to monitor market dynamics, foster ethical practices, and encourage responsible innovation, thereby ensuring that the evolution of robots like Emo contributes positively to society.

The success of products like Emo and the industry's trajectory will heavily rely on striking a balance between innovation and the humane and ethical application of technology. Thought leaders, developers, and policymakers will need to collaborate to navigate these challenges successfully. The trends in the robotics industry point towards a future where emotionally intelligent machines become an integral part of the fabric of society, enhancing human life while addressing the ethical implications of such profound technological integration.

Leokadia Gogulska is an emerging figure in the field of environmental technology, known for her groundbreaking work in developing sustainable urban infrastructure solutions. Her research focuses on integrating green technologies in urban planning, aiming to reduce environmental impact while enhancing livability in cities. Gogulska's innovative approaches to renewable energy usage, waste management, and eco-friendly transportation systems have garnered attention for their practicality and effectiveness. Her contributions are increasingly influential in shaping policies and practices towards more sustainable and resilient urban environments.

Continued here:

Pioneering Emotional Intelligence in Robotics: The Rise of Emo - yTech

Ranching of tomorrow: Smooth Ag bringing robotics to ranchers with autonomous Ranch Rover – Graham Leader

By automating the cattle feeding process, the Graham-based company Smooth Ag is looking to bring the innovation of robotics to ranchers through its autonomous Ranch Rover vehicle.

The Ranch Rover was the creation of fourth-generation rancher River McTasney, who had the agricultural lifestyle ingrained in him from a young age, growing up on a 3,000-acre ranch and tending 120 head of cattle.

"I went to school at Paint Creek High School, an agriculture community. Most of us kids there grew up working on our own stuff. We have a mechanical skill set from that lifestyle that really equips us with the problem-solving skills that I think a lot of people from outside of the rural community may not quite get," McTasney said. "...So that problem-solving skill set really helped with this later on down the road."

Following high school, McTasney attended Texas A&M University and graduated in 2018 with a degree in construction management. He worked for a year in College Station in sales for an HVAC company before deciding he wanted a break and moved back to his family ranch.

"I was feeding cows and I was like, 'There's got to be a better way to do this.' ...Being one of the only able-bodied people on the ranch, there was other stuff I needed to get done instead of spending three hours a day in the feed pickup," he said. "I started tinkering with different ideas and finally decided that a mobile platform, just like a feed pickup without the driver, was the best way to do it."

McTasney learned to code with the intention of making the dream of the Ranch Rover a reality. Over the next two years he built a conceptual machine on an old pickup truck frame and eventually moved up to the current prototype.

"It has a 4,000-pound payload. It's GPS waypoint navigation fused with machine vision, so it's completely autonomous. They have the ability to set routes and then, with those routes, set individual feed missions... and those are on a timer," he said. "You can schedule them however you like, you can pick your feed locations (and) pick how much you're going to feed at each of those feed locations."
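The scheduling scheme McTasney describes (named feed locations grouped into routes, each mission on a timer with a per-stop feed amount) can be sketched as a small data model. All class names, fields, and coordinates below are illustrative assumptions, not Smooth Ag's actual software:

```python
# Illustrative sketch of the route / feed-mission scheme described above:
# named waypoints grouped into a route, with a timed mission assigning a feed
# amount to each stop. All names and values here are hypothetical.
from dataclasses import dataclass
from datetime import time

@dataclass
class FeedStop:
    name: str
    lat: float
    lon: float
    feed_lbs: float  # how much feed to dispense at this stop

@dataclass
class FeedMission:
    route: list[FeedStop]
    start: time  # daily timer: when the rover departs

    def total_payload(self) -> float:
        """Total feed required; must fit within the rover's 4,000-lb payload."""
        return sum(stop.feed_lbs for stop in self.route)

morning = FeedMission(
    route=[
        FeedStop("north_trough", 33.10, -98.58, 600.0),
        FeedStop("creek_pasture", 33.08, -98.60, 450.0),
    ],
    start=time(6, 30),
)
assert morning.total_payload() <= 4000  # within the stated payload limit
print(morning.total_payload())
```

A real system would layer GPS navigation and machine vision underneath this kind of mission description; the sketch only captures the scheduling interface the article describes.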

The rover has data-driven decision making, which McTasney said can provide owners with information for planning.

"There's a lot of data collection involved as well that's going to be extremely valuable. With computer vision it's one of those things that is hard to see, but the way technological advancements are working out right now, computer vision is getting amazing," he said. "The type of data that we're going to be able to directly feed back to the customer based off of that is actually going to be really insane. It's going to be very valuable. So that's just one of the perks of solving a problem directly is we get to put up those various sensors and cameras on this thing and kind of knock out two birds with one stone."

Around a year and a half ago, McTasney connected with representatives from Graham to see if they wanted to be involved in making the city a home base for the project. McTasney also wanted the site because he has land nearby.

"We have land in Caddo as well... just east of Breckenridge. I wanted to stay around home because we do have obligations to the ranch. ...Graham is just a great community, too," he said. "...Whenever you're doing something like this, you're really grabbing everything you can to stay motivated and keep doing it, and so you really want to be surrounded and supported by a community that believes in success, believes in new things. I think Graham did a really good job of displaying that and really got me roped in."

The company has a 4,000-square-foot shop on Rocky Mound Road in Graham and has expanded to a three-man internal team.

The company has $400,000 in the sales pipeline for orders and will be delivering its first vehicle to Oklahoma State University next week. The team has been busy showcasing the rover, most recently at the Texas and Southwestern Association Convention at the end of March.

"The response has been incredible. We picked up three more customers there in one day. That's without having any inventory, which is a really neat thing," he said. "These guys know... it's going to be a while there. They've got about a six-month lead time. So that in itself, getting people to sign a letter of intent saying that they're going to buy one as we produce, that's... a very validated customer and a very convicted customer. So they believe in us, they really like what we're doing. This is something they feel can be very useful and beneficial in their operation."

McTasney said the rover is tailored to the actual needs of cattle ranches, which helps address the labor shortage. While the company is focused on the Ranch Rover, its pasture-land model for open-range cow/calf operations, it plans to address another need with a feedlot machine within the next 18 months.

"(There's) a huge demand in feedlots. That's a much bigger machine mechanically... so we'll focus on Ranch Rover, this pasture-land model, to grow those sales numbers to continue to prove validation for investors," he said. "We'll move sometime in the next one-to-two years to building out a much larger machine built specifically for feedlots, which is going to be a real enterprise as this is new technology for them as well. And that's a huge labor burden, compared to the pasture land."


Middle school robotics team going to world championship – The Sparta Independent

Three teams in the Sparta Middle School robotics program made it to the finals of the state competition and one is going on to the 2024 VEX Robotics World Championship.

That team, with three returning members from last year, placed second in states and won the Design Award, which qualified them for the world competition.

This is the program's second year competing.

Last year, five student groups competed for the first time in the VEX IQ Robotics season. Two teams made it to states and one team advanced to the 2023 VEX Robotics World Championship in Dallas.

This year's team is asking for donations to help fund its trip to the world championship.

For information and to donate, go online to http://www.gofundme.com/f/sparta-middle-school-robotics-world-championship?utm_campaign=p_cp%20fundraiser-sidebar&utm_medium=copy_link_all&utm_source=customer&fbclid=IwAR0rfjdfaY3ODlFH3AAFRvivnav_RLe8lNk32Qgz87y1s7Lmgl2uoYk3buM_aem_AUQe3KULgv3YlwbYc9ypbkYVIpE5ShxJepsydptvUulCgaOpXF2_dOgBuFI3zNr2KXJ08_-coghBL0CQuV9cgFcm


TMHS Robotics wins regional competition | Sports | homenewshere.com – Woburn Daily Times

The Tewksbury High Titans Robotics team competed in the New England North Shore District event on Saturday and Sunday, a regional robotics competition hosted by FIRST Robotics at Reading High School.

Thirty-six teams from around New England participated.

The Titans moved through the qualifying 3v3 rounds and were then chosen to compete in the finals alongside Littleton (MA) High School and Windham (NH) High School, thanks to their robot's ability to execute a specific challenge in the arena. The three-team alliance proved strategic, taking the first two rounds quickly for the win.

The victory was significant for the team. The TMHS robotics team started competing in 2004, won in 2008, and has placed second two or three times over the years. The team will travel to another regional competition at the University of New Hampshire.

Teams compete on a pre-defined field with a driver, bot operator, and a drive coach who calls plays.

Teams range in size, skill and age at the tournament and compete six at a time (two alliances of three teams). The theme for this year's competition was CRESCENDO, with challenges based around shooting notes into amps and speakers.

The team is supported by volunteer mentors, many of whom are professionals from a host of industries, including engineering, computer science, business, and more. While mentors support the design and development of the robot, they also help students hone soft skills for future college and professional success.

Scott Morris leads the team with Victor Impink, Abiche Dewilde, Berk Akinci, Chris Mullins, Randy August, Chris White and Josh Nichols. TMHS alums David Penney and Eric Impink have been mentoring the team as well.

Competitors for Tewksbury included Donovan Conway, Liam Mullins, Maya Sachdev, Renuka Late, Corvid Dewilde, Jared Woodman, Alex Grove, Christine Buskey, Jordan Troughton, Joanna Green, Becca Matte and Dylan Warren. Additional team members include Amelia Lombardi, Luc Jodoin, and Caden White.

This year's challenge had the robots intaking foam rings and shooting at high and low goals, each at different angles. A climbing element was also included, in which robots pulled themselves off the ground and hung from a chain. The team was not without game-time challenges, including overcoming a crumpled arm support, two bent shooter axles, wiring issues, and loose bolts. According to the mentors, the team kept solving problems and never missed a match.

The students have worked on the robot on evenings and weekends since January. Team members build everything from scratch, gaining technical skills such as machining, 3D printing, laser cutting, wiring and coding, as well as learning project management, public relations and finance. The addition of a swerve drive this year, a drivetrain whose wheels steer independently so the robot can spin and move quickly, was a game-changer for the team.

Morris considers the team the premier STEM opportunity at TMHS and encourages community members interested in giving their time to get involved. In addition, the team is always seeking sponsors and is grateful for this year's support from iRobot, Onco Filtration, Teradyne, PTC, Holt & Bugbee, Qualcomm, Tokyo Electron, BAE Systems, RTX, Routsis Training, and Tewksbury Public Schools.

Contact the team via email at frcteam1474@gmail.com for additional sponsorship opportunities or to get involved.


Gurman: Apple working on personal robotics as next skunkworks project – 9to5Mac

Apple turned Jetsons-style video calling into reality with FaceTime. Now the company sees personal robotics as an area worth exploration, reports Mark Gurman for Bloomberg. Is Rosey the Robot the next Jetsonian technology to become a reality?

It's way too early to know if Apple will popularize the robot house maid, but Mark Gurman has some very interesting details about a private skunkworks project going on at the company.

"Engineers at Apple have been exploring a mobile robot that can follow users around their homes," said the people, who asked not to be identified because the skunkworks project is private. "The iPhone maker also has developed an advanced table-top home device that uses robotics to move a display around," they said.

Gurman adds that the robotic display is further along than Apple's mobile bot for the home. However, the robo monitor has been added to and removed from the company's product road map over the years, he reports.

True to that history, Gurman has regularly reported details of the iPad-like product with a robotic arm for the home over the years.

What's different now? For starters, Apple cleared the runway for its next product category when it canceled its electric car project this year. AI and a continued interest in smart home technology also fuel Apple's interest in home robotics. As with the car project, though, Tesla has already shown off work on its own robotics project.

Gurman further describes the table-top robotics hardware as something that will have the display mimic the head movements, such as nodding, of a person on a FaceTime session. It would also have features to precisely lock onto a single person in a crowd during a video call.

Obstacles include creating something with a reasonable price and gaining executive sign-off before the project progresses. Gurman highlights, however, that a job listing from Apple openly discusses next-gen Apple products that use robotics and AI, so there are already external signs of life for the effort.



The 3 Most Undervalued Robotics Stocks to Buy in April 2024 – InvestorPlace

AI led the stock market to unprecedented heights last year, sparking interest in complementary technologies such as robotics. Thanks to game-changing advancements in AI and automation technology, the robotics space is evolving swiftly. Consequently, these developments effectively pave the way for investors to scout for the most undervalued robotics stocks to buy in April.

To be fair, robotic AI hasn't developed at the same pace as generative AI or other branches of the technology. Nevertheless, the sector has been showing remarkable progress, and AI can potentially take things up a few notches. Improvements in robot durability and functionality are a testament to what lies ahead. That said, three stocks are leading the charge in robotics, offering strong long-term upside potential.

Source: Sundry Photography / Shutterstock.com

Intuitive Surgical (NASDAQ:ISRG) is a force to be reckoned with in the fast-growing robotic-assisted surgery industry. Its primary offering, the da Vinci Surgical System, facilitates minimally invasive operations with greater accuracy and agility.

Moreover, the da Vinci Surgical System has been a major needle-mover for ISRG, facilitating upwards of 13 million surgical procedures. Its impact on the company can be seen in the 234% jump in sales, to $7.1 billion, from 2014 to last year. Also, its steady income streams have had a similar impact on its eye-catching bottom-line numbers.

ISRG has been killing it, posting strong numbers of late despite operating in a challenging market. It comfortably beat analyst estimates on both the top and bottom lines in three of the past four quarters, by considerable margins. In its most recent quarterly report, revenue rose to $1.93 billion, a 16.51% increase year-over-year (YOY). Likewise, net income came in at an impressive $606.2 million, beating expectations by more than 85%. Additionally, with an aging population, expect ISRG to continue posting similar numbers for the foreseeable future.
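As a quick cross-check, the growth rates quoted above imply the base-period figures. A sketch using only numbers from the article; the derived bases are arithmetic consequences, not independently reported figures:

```python
# Back out the implied base figures from the growth rates the article quotes.
# Inputs ($7.1B, 234%, $1.93B, 16.51%) are from the article; the derived bases
# are arithmetic consequences of those inputs, not reported numbers.

def implied_base(current: float, pct_growth: float) -> float:
    """Revenue that, grown by pct_growth percent, yields `current`."""
    return current / (1 + pct_growth / 100)

rev_2014 = implied_base(7.1, 234)          # implied 2014 revenue, $B
prior_quarter = implied_base(1.93, 16.51)  # implied year-ago quarter, $B
print(f"Implied 2014 revenue: ${rev_2014:.2f}B")
print(f"Implied year-ago quarterly revenue: ${prior_quarter:.2f}B")
```

Under these inputs, 2014 revenue works out to roughly $2.1 billion and the year-ago quarter to roughly $1.66 billion, both consistent with the growth story the article tells.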

Source: Michael Vi / Shutterstock.com

Defense solutions provider Kratos (NASDAQ:KTOS) is a critical cog in the wheel, driving innovation through its robust product portfolio. It specializes in unmanned systems developed and deployed for modern military operations.

These products are designed for surveillance, reconnaissance, and combat operations. Over the years, the stock has been an excellent wealth compounder, delivering more than a 136% gain in the past decade. The impressive uptick in its price is linked to spectacular growth in its top- and bottom-line results, marked by double-digit gains across key metrics. Moreover, recent results have been a treat for investors, with the company outperforming estimates on both lines in the past seven consecutive quarters.

Furthermore, as a recent article from my fellow InvestorPlace colleague Larry Ramer explains, Kratos has recently inked some massive contracts with the U.S. government. Perhaps the most noteworthy is its $579 million deal with the U.S. Space Force. Additionally, in March alone, it received contracts exceeding $550 million in value from the Pentagon.

Source: Daniel J. Macy / Shutterstock.com

ABB (OTCMKTS:ABBNY) is a top pick in the burgeoning industrial automation space, leveraging AI to push the envelope in the Industrial Internet of Things (IIoT) industry. ABBNY stock got a strong AI-powered boost in the stock market last year, gaining over 34%.

According to ABB, roughly 20% of the data produced by industrial entities undergoes analysis, and an even smaller fraction results in actionable insights. Hence, it is looking to pounce on this underserved market with its Genix software, which harnesses AI for industrial analytics, aiming to unlock valuable insights for its customers.

Despite AI's disruptive impact, ABB doesn't rely solely on its software analytics business. It runs a diversified operation providing robotics, automation, electrification, and motion products globally. Moreover, given the diversity of its revenue base, it operates a highly consistent business that's been exceptionally profitable across key metrics. On top of that, it offers a growing dividend, yielding over 2.1%.

On the date of publication, Muslim Farooque did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Muslim Farooque is a keen investor and an optimist at heart. A lifelong gamer and tech enthusiast, he has a particular affinity for analyzing technology stocks. Muslim holds a bachelor of science degree in applied accounting from Oxford Brookes University.


Why household robot servants are a lot harder to build than robotic vacuums and automated warehouse workers – The Conversation

With recent advances in artificial intelligence and robotics technology, there is growing interest in developing and marketing household robots capable of handling a variety of domestic chores.

Tesla is building a humanoid robot, which, according to CEO Elon Musk, could be used for cooking meals and helping elderly people. Amazon recently acquired iRobot, a prominent robotic vacuum manufacturer, and has been investing heavily in the technology through the Amazon Robotics program to expand robotics technology to the consumer market. In May 2022, Dyson, a company renowned for its powerful vacuum cleaners, announced that it plans to build the U.K.'s largest robotics center devoted to developing household robots that carry out daily domestic tasks in residential spaces.

Despite the growing interest, would-be customers may have to wait awhile for those robots to come on the market. While devices such as smart thermostats and security systems are widely used in homes today, the commercial use of household robots is still in its infancy.

As a robotics researcher, I know firsthand how household robots are considerably more difficult to build than smart digital devices or industrial robots.

One major difference between digital and robotic devices is that household robots need to manipulate objects through physical contact to carry out their tasks. They have to carry the plates, move the chairs and pick up dirty laundry and place it in the washer. These operations require the robot to be able to handle fragile, soft and sometimes heavy objects with irregular shapes.

State-of-the-art AI and machine learning algorithms perform well in simulated environments, but contact with objects in the real world often trips them up. This happens because physical contact is often difficult to model and even harder to control. While a human can easily perform these tasks, there are significant technical hurdles for household robots to reach human-level ability to handle objects.

Robots have difficulty in two aspects of manipulating objects: control and sensing. Many pick-and-place robot manipulators like those on assembly lines are equipped with a simple gripper or specialized tools dedicated only to certain tasks like grasping and carrying a particular part. They often struggle to manipulate objects with irregular shapes or elastic materials, especially because they lack the efficient force, or haptic, feedback humans are naturally endowed with. Building a general-purpose robot hand with flexible fingers is still technically challenging and expensive.

It is also worth mentioning that traditional robot manipulators require a stable platform to operate accurately, but the accuracy drops considerably when using them with platforms that move around, particularly on a variety of surfaces. Coordinating locomotion and manipulation in a mobile robot is an open problem in the robotics community that needs to be addressed before broadly capable household robots can make it onto the market.

In an assembly line or a warehouse, the environment and sequence of tasks are strictly organized. This allows engineers to preprogram the robots movements or use simple methods like QR codes to locate objects or target locations. However, household items are often disorganized and placed randomly.

Home robots must deal with many uncertainties in their workspaces. The robot must first locate and identify the target item among many others. Quite often it also requires clearing or avoiding other obstacles in the workspace to be able to reach the item and perform given tasks. This requires the robot to have an excellent perception system, efficient navigation skills, and powerful and accurate manipulation capability.

For example, users of robot vacuums know they must remove all small furniture and other obstacles such as cables from the floor, because even the best robot vacuum cannot clear them by itself. Even more challenging, the robot has to operate in the presence of moving obstacles when people and pets walk within close range.

While they appear straightforward for humans, many household tasks are too complex for robots. Industrial robots are excellent for repetitive operations in which the robot motion can be preprogrammed. But household tasks are often unique to the situation and could be full of surprises that require the robot to constantly make decisions and change its route in order to perform the tasks.

Think about cooking or cleaning dishes. In the course of a few minutes of cooking, you might grasp a sauté pan, a spatula, a stove knob, a refrigerator door handle, an egg and a bottle of cooking oil. To wash a pan, you typically hold and move it with one hand while scrubbing with the other, and ensure that all cooked-on food residue is removed and then all soap is rinsed off.

There has been significant development in recent years using machine learning to train robots to make intelligent decisions when picking and placing different objects, meaning grasping and moving objects from one spot to another. However, to be able to train robots to master all different types of kitchen tools and household appliances would be another level of difficulty even for the best learning algorithms.

Not to mention that people's homes often have stairs, narrow passageways and high shelves. Those hard-to-reach spaces limit the use of today's mobile robots, which tend to use wheels or four legs. Humanoid robots, which would more closely match the environments humans build and organize for themselves, have yet to be reliably used outside of lab settings.

A solution to task complexity is to build special-purpose robots, such as robot vacuum cleaners or kitchen robots. Many different types of such devices are likely to be developed in the near future. However, I believe that general-purpose home robots are still a long way off.


Robots Go to Work in Japanese C-Stores – NACS Online

TOKYO – In Japan, robots are restocking the shelves of some of the country's conbini, small stores that sell snacks, drinks and miscellaneous items, reports the Associated Press. The robot is called TX SCARA, and the units are in 300 of the 16,000 FamilyMart stores in Japan. There are about 56,000 conbini in Japan overall.

The TX SCARA has a hand on one end of its mechanical arm; it grabs beverages from stacks on the sides of the shelves and restocks the shelves correctly, using AI and cameras to figure out which beverages need to be replaced. The robot can restock up to 1,000 beverages a day.

"We want to automate all the repetitive jobs and boring jobs done by humans. That is the direction we are going. And the best way to do that is to use the robots," Jin Tomioka, CEO of Tokyo-based Telexistence, which created TX SCARA, told the AP.

Many of the Japanese conbini are open 24/7 and stock thousands of products but have few workers. The robots by Telexistence are meant for established retailers, and there's no need to change current store layouts. The robots are reportedly more affordable than industrial robots and are designed to coexist and collaborate with people, completing routine tasks.

The robots allow for remote control, and Telexistence employees can remotely see problems with the robots as they happen, such as a dropped beverage in the case of the TX SCARA robot.

Japan's population is aging, leaving the country with a labor shortage that is expected to worsen. FamilyMart CEO Tomohiro Kano referred to the Japanese expression "seeking even a cat's paw for help" to describe how the labor situation might escalate.

"At FamilyMart, we are seeking a robot's arm for help," he told the AP.

In the U.S., robot labor is growing amid the country's own labor shortage. Robot orders for workplaces increased 40% during the first quarter of 2022 compared with the first quarter of 2021, a record amount. The U.S. has been slower to embrace robotics than other industrialized countries; the number of robots deployed in the U.S. per 10,000 workers has traditionally trailed countries such as South Korea, Japan and Germany. But America's attitude is shifting.

With many industry observers stating that the pandemic has triggered a fundamental reset of retail, new technologies including robotics, machine learning and AI are being more rapidly deployed to enable operators to respond to the new norm. Read more in the NACS Magazine feature "Robots Deliver."

Mark your calendars for February 28 to March 2, 2023, when NACS Convenience Summit Asia heads to Bangkok, Thailand, where you'll be transported into the epicenter of retail disruption and innovation, Asia, for an immersive look into the future of convenience retailing. Sign up to be notified when registration opens.


Robotics hiring levels in the offshore industry rose in August 2022 – Offshore Technology

The proportion of offshore oil and gas industry operations and technologies companies hiring for robotics-related positions rose in August 2022 compared with the equivalent month last year, with 24.7% of the companies in our analysis recruiting for at least one such position.

This latest figure was higher than the 19% of companies that were hiring for robotics-related jobs a year ago and an increase from the 18.5% recorded in July 2022.

The share of all newly posted job openings linked to robotics, however, dropped from July 2022 to August 2022, with 0.9% of newly posted job advertisements being linked to the topic.

This latest figure was an increase compared to the 0.6% of newly advertised jobs that were linked to robotics in the equivalent month a year ago.

Robotics is one of the topics that GlobalData, from whom the data for this article is taken, has identified as a key disruptive force facing companies in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

Our analysis of the data shows that offshore oil and gas industry operations and technologies companies are currently hiring for robotics jobs at a rate higher than the average for all companies within GlobalData's job analytics database. The average among all companies stood at 0.6% in August 2022.

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.

You can keep track of the latest data from this database as it emerges by visiting our live dashboard here.


Legged Robots to Aid with Planetary Research – USC Viterbi | School of Engineering – USC Viterbi School of Engineering

(Photo/Courtesy of Feifei Qian)

Every night across the globe, people look up and see the moon. Some nights, it appears as a small sliver; other nights, it is full and lights up the dark sky. What is this satellite made of? Does it have water? Has it ever sustained animal or plant life?

Robots have long assisted scientists in answering these and other questions about the moon and the planets that grace our solar system. However, today's rovers have generally had wheels that can get stuck in a planet's terrain, sometimes causing them to abort important exploration missions.

Feifei Qian, a WiSE Gabilan Assistant Professor at the USC Viterbi School of Engineering, is leading a three-year, $3-million research project funded by NASA to create legged robots that could more easily move over icy surfaces, crusted sand, and other difficult-to-navigate environments, significantly enhancing scientists' ability to gather information from planetary bodies.

The overarching goal of this endeavor is to understand how to integrate robotics technology with both planetary science and cognitive science, to improve robot-aided exploration of planetary environments. Essentially, this project aims to create next-generation, high-mobility robots and rovers that can easily move across planetary surfaces and flexibly support human scientists' exploration goals.

This project employs bio-inspired robots with legs, meaning their form is modeled after animals' unique abilities to move well on challenging surfaces like soft sand. Utilizing the latest direct-drive actuator technology, these robots can feel the terrain (e.g., sand softness, rock shapes) through their legs. This ability allows the legged robots to interact with the environment in the same manner as animals do, adjusting their movement as needed.

As lead investigator Qian puts it, these robots are modeled in a manner that allows them to "not just mimic how the animals look, but really understand what makes these animals successful on different terrains."

The ability to feel the terrain using legs also allows these legged robots to easily gather information about the environment as they move around, and adjust exploration strategies based on this information. Integrated with additional scientific instrumentation, these robots can collect a large amount of useful information as they walk around planetary surfaces.

"For the many planetary environments that we would like to explore, we would like to send rovers and robots to gather information before sending humans," said Qian. "Even for environments where it's safe to send astronauts, mobile robots can integrate scientific instrumentation and help take precise measurements while moving around."

While the Mars Exploration Rovers and other robots have been successfully sent into space, they typically operate based on pre-programmed agendas, which means human scientists and engineers need to input detailed instructions on where to go and what to do prior to the robot's arrival at the planet. As a result, when the robot encounters an unexpected scenario or discovers an interesting measurement, it has limited capability to adapt its plan. This can hinder the robot's or rover's ability to navigate new environments effectively, or even cause it to miss opportunities to make important scientific discoveries.

Qian's research project, LASSIE (Legged Autonomous Surface Science In Analogue Environments), seeks to make it possible for robots to move more effectively through various environments while simultaneously gathering information about them. This information has the potential to allow researchers to understand more about what these planets look like and how they react to disturbances. By understanding how human scientists interpret this information and adapt their exploration plans, roboticists and cognitive scientists on the LASSIE team will together create intelligent robots that can begin to make exploration decisions like a scientist. One of the questions guiding this research, said University of Pennsylvania Professor Doug Jerolmack, a co-investigator, is "how do we exploit a robot most efficiently, so it takes on some of the burdens of decision making?"

The NASA project will fund Qian and her team to test these legged robots at locations such as Mount Hood, Oregon, and White Sands, New Mexico, which mimic the terrains of planets such as Mars and the moon. Analyzing how Qian's bio-inspired robots perform on Earth will allow researchers to make tweaks before the legged robots are deployed on other planets and the moon.

USC Viterbi's Prof. Feifei Qian is the lead researcher for a $3-million project that could significantly improve robots' ability to aid with planetary exploration.

One of the legged robots measuring regolith strength in White Sands, NM. Photo/Courtesy Feifei Qian

The research group consists of Qian and eight co-investigators from institutions including Texas A&M University, the University of Pennsylvania, Oregon State University, and the Georgia Institute of Technology, as well as the NASA Johnson Space Center. Much of the NASA funding supports the students who work on this project.

"This is the dream team and a very rare chance to bring a team with all the components into one project," Qian said.

Qian joined USC Viterbi as an assistant professor in 2020. She holds a master's degree in physics and a Ph.D. in electrical and computer engineering from the Georgia Institute of Technology.

"The technology and understanding that this project will develop will also be beneficial for scientific explorations on Earth [because] our project will provide understanding of how human scientists make sampling decisions and adapt exploration strategies in response to incoming measurements," said Qian. With a team of roboticists, earth and planetary scientists, and cognitive scientists working together, the LASSIE team led by Prof. Qian will create next-generation robots and rovers that can significantly expand our knowledge about the moon and other planets.

Published on September 7th, 2022

Last updated on September 7th, 2022

Originally posted here:

Legged Robots to Aid with Planetary Research - USC Viterbi | School of Engineering - USC Viterbi School of Engineering

HAI ROBOTICS Japan and Gaussy to Cooperate in Robot Subscription Services and Exhibit HAIPICK A42N at Logis-Tech Tokyo 2022 | RoboticsTomorrow -…

HAI ROBOTICS Japan and Gaussy Inc. signed a distribution agreement and will exhibit the comprehensive "Roboware" solution at Logis-Tech Tokyo 2022.

HAI ROBOTICS Japan and Gaussy Inc. signed a distribution agreement in August 2022 to promote the HAIPICK ACR series in Japan. Under the collaboration, the HAIPICK A42N will be offered through "Roboware," a robot subscription service provided by Gaussy, alongside warehouse solutions from both companies. The two companies are planning various joint demonstration activities, and will exhibit the comprehensive "Roboware" solution at "Logis-Tech Tokyo 2022" to demonstrate the benefits of the "HAIPICK A42N."

Joint exhibition solutions

HAIPICK ACR system "HAIPICK A42N" demo: The ACR robot was independently developed by HAI ROBOTICS. It supports mixed picking of cartons and totes, including cases up to 160 cm in size, larger than the standard, increasing picking efficiency. It also enables high-density storage. The HAIPICK ACR system is one of HAI ROBOTICS' main products and has been deployed in more than 500 projects worldwide.

Gaussy exhibition booth: The company will exhibit its warehouse robot subscription service "Roboware." In addition, visitors will have the chance to see the three-dimensional sorting robot "Omni Sorter," the shelf transfer robot "Ranger GTP," the pallet transport robot "Ranger IL," the autonomous robot "FlexComet/FlexSwift," and the sharing-type warehouse service "WareX."

About the exhibition

Event name: Logis-Tech Tokyo 2022

Date: Tuesday, September 13 - Friday, September 16, 2022, 10:00 - 17:00

Venue: Tokyo Big Sight (Tokyo International Exhibition Center) East Halls 1-8

Booth No.: East Hall 5 Booth 5-502 Display Name "Gaussy Co., Ltd."


About HAI ROBOTICS JAPAN, Inc.

Founded in 2016 in Shenzhen, China, HAI ROBOTICS is the pioneer and leader in Autonomous Case-handling Robot (ACR) systems. The company provides efficient, intelligent, flexible, and customized warehouse automation solutions through robotics technology and AI algorithms. Since its establishment, it has grown rapidly by incorporating global needs and providing a comprehensive ACR system developed in-house. The company was ranked in the "Unicorn Ranking List" of Hurun Research Institute in December 2021. HAI ROBOTICS now has more than 500 projects globally and offices in the U.S., Europe, Japan, Southeast Asia, Australia, Hong Kong and Taiwan, serving customers from more than 30 countries and regions.

HAI ROBOTICS JAPAN Co., Ltd. was established in 2021 in Japan as a subsidiary of HAI ROBOTICS Co., Ltd., and promotes automation and DX in the logistics and manufacturing industries through one-stop services, from the introduction of ACR systems to operational support, aiming to conduct business activities specialized for the Japanese market. The HAI ROBOTICS Japan Technical Center was opened in March 2022 to demonstrate and simulate ACR products and solutions.

About Gaussy Inc.

With the vision of "Logistics gets you there," Gaussy will provide new options for businesses by building a system that can flexibly respond to changes in warehouse needs and cargo volume. Gaussy offers two services: "Roboware," a subscription-type monthly warehouse robot service that allows anyone to easily operate a warehouse using robots, and "WareX," a sharing warehouse service that allows anyone to easily use vacant warehouse space.

Here is the original post:

HAI ROBOTICS Japan and Gaussy to Cooperate in Robot Subscription Services and Exhibit HAIPICK A42N at Logis-Tech Tokyo 2022 | RoboticsTomorrow -...