CBP Software’s Collaborative Business Planning platform builds Digital Twin of the Organization (DTO) business models, integrating operational, financial, and resource data to simulate scenarios, reveal bottlenecks, and support planning, process improvement, and profitability analysis on QualiWare’s cloud-based environment.
Drishya AI is a Calgary-based deep-tech company using AI-powered digital twins and tools like ARTISAN and BRAINS to turn engineering data into contextual intelligence, enabling brownfield industrial plants to optimize operations, accelerate simulations, and advance decarbonization and net-zero goals.
Prevu3D offers a reality capture–based 3D digital twin platform connecting engineering and operations, with products like RealityPlatform, RealityTWIN, RealityPlan, and RealityConnect that help asset-intensive industries plan layouts, validate designs, and streamline workflows using immersive as-built twins.
RedLine Digital Twin delivers 3D scanning and digital twin services that create precise virtual records and interactive tours of physical spaces, helping Alberta businesses improve marketing, planning, training, and engineering workflows with accurate point clouds and immersive digital experiences.
VEERUM provides a visual asset management and VisOps platform that unifies massive industrial datasets into browser-based 3D environments, enabling remote collaboration, progress tracking, and decision-making for asset-intensive industries, delivering strong ROI across billions in managed assets.
ThoughtWire delivers a patented real-time digital twin operating system for built environments and healthcare, orchestrating data from buildings, districts, cities, and hospitals to optimize operations, ESG performance, and connected user experiences across HealthTech and PropTech use cases.
Wednesday, February 04, 2026
FREMONT, CA: Technology keeps pushing the envelope in the search for healthier, cleaner air, bringing new ways to reduce pollution and improve quality of life. Living technology is one of these innovations and a promising new direction for air cleaning. By harnessing the power of nature itself, it provides a workable and sustainable way to enhance indoor air quality, opening the door to a healthier and more ecologically sensitive future.

Living technology purifies air using fungi, bacteria, and plants, taking its inspiration from the natural systems that already cleanse the air around us. Plants, for example, have long been recognized for their capacity to absorb carbon dioxide and release oxygen through photosynthesis, and newer studies have shown they can also help remove dangerous indoor pollutants such as formaldehyde, benzene, and volatile organic compounds (VOCs).

One of the key advantages of living technology lies in its sustainability. Unlike traditional air purifiers that rely on mechanical filters or chemical processes, living systems use natural processes that require minimal energy input. By harnessing the inherent capabilities of living organisms, these technologies offer a renewable and eco-friendly alternative to conventional air purification methods. They also contribute to indoor greening efforts, enhancing the aesthetic appeal of spaces while improving air quality.

Biofilters, which use a combination of microbes and plant roots to remove pollutants from the air, are an example of living technology in action. They offer scalable solutions for a variety of indoor settings, from homes and workplaces to schools and hospitals, and can be installed as standalone units or incorporated into existing HVAC systems. Through the symbiotic relationship between plants and microbes, biofilters break down pollutants into harmless byproducts, producing cleaner, fresher indoor air.

Living technology also benefits human health and well-being. Research shows that indoor plants can improve mood, reduce stress levels, and support cognitive performance. By making interior spaces healthier and more enjoyable, living technology promotes comfort and productivity, making it a valuable addition to both household and business settings. Vegetation can also help lessen the symptoms of Sick Building Syndrome (SBS), a condition linked to poor indoor air quality that can cause fatigue, headaches, and respiratory problems.

As society grapples with the challenges of urbanization and climate change, the need for sustainable solutions to environmental issues becomes increasingly urgent. Living technology offers a holistic approach to air purification that aligns with the growing emphasis on sustainability and green living. By leveraging nature's own mechanisms, we can create healthier indoor environments while minimizing our ecological footprint. Integrating living technology into building designs and urban planning strategies also holds the potential to transform cities into healthier, more livable spaces for all.
Tuesday, February 03, 2026
Fremont, CA: Haptic technology uses tactile feedback to simulate a sense of touch and is rapidly transforming how humans interact with digital devices. While traditionally used in video game controllers and mobile phones, it is now being applied across a range of fields, offering a new way of engaging people and communicating with the world. Because it can convey real-world touch sensations to the user, it stands to redefine user experiences across a broad spectrum, from entertainment to medicine and virtual reality.

Haptic technology is changing how people experience virtual and augmented reality (VR and AR) by providing tactile feedback that connects users with their actions. It simulates the sensation of touching or grasping an object, making virtual environments more engaging and immersive. As VR and AR see wider use in gaming, training, and simulation, haptic technology will play a central role in making these experiences more compelling and lifelike.

Beyond entertainment, the applications of haptic technology are extensive. In healthcare, haptic feedback is being explored for remote surgery and rehabilitation. Surgeons performing remote operations with robotic systems can use haptic devices to feel the varying textures and resistances of tissues as if they were physically performing the surgery, improving accuracy, supporting better judgment, and reducing the risks of remote procedures. Haptic technology is also incorporated into physical therapy tools to guide patients through rehabilitation exercises: tactile cues help patients perform movements correctly, improving recovery outcomes.

Haptic technology has also transformed wearable devices such as smartwatches and fitness trackers, providing interactive and personalized experiences. These devices use subtle vibration alerts for messages and reminders, improving communication without relying on visual cues. In fitness applications, haptic feedback can guide users through workout routines, adding another interactive layer and making wearables more intuitive and responsive.

The automotive industry is likewise exploring haptic technology to enhance safety for drivers and passengers. Haptic feedback integrated into steering wheels, seat belts, or vehicle seats can alert the driver to potential hazards or changes in driving conditions. A vibrating steering wheel, for example, can act as a lane departure warning system, vibrating if the vehicle drifts out of its intended lane, while vibrating seats can prompt the driver to slow down (a simple sketch of such a trigger appears after this article). By combining haptic cues with visual and auditory signals, vehicles can offer a more holistic and intuitive safety system.

Haptic technology is poised to revolutionize how we engage with the digital world. Because it provides tangible, physical feedback, it makes experiences more immersive, interactive, and personalized across sectors such as entertainment, healthcare, wearables, and automotive. As the technology continues to evolve and its applications expand, interaction with devices will take new shapes, enabling engagement in ways once considered impossible. The future of engagement is tactile, and haptic technology is leading the way.
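As a rough illustration of the lane-departure cue described above, the following Python sketch shows a simplified trigger; the field names, thresholds, and vibration pattern names are invented for the example and do not reflect any vendor's system.

```python
# A toy sketch of a lane-departure haptic cue. The field names, thresholds, and
# vibration pattern names are invented for illustration; a real system would use
# the vehicle's lane-detection stack and actuator interfaces.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LaneState:
    offset_m: float        # lateral distance of the vehicle from the lane centre
    turn_signal_on: bool   # driver has indicated an intentional lane change

DRIFT_THRESHOLD_M = 1.4    # illustrative distance at which the wheel should buzz hard

def haptic_cue(state: LaneState) -> Optional[str]:
    """Return a vibration pattern name if the steering wheel should vibrate, else None."""
    if state.turn_signal_on:
        return None                      # intentional manoeuvre: no warning
    if abs(state.offset_m) > DRIFT_THRESHOLD_M:
        return "strong_pulse"            # about to leave the lane
    if abs(state.offset_m) > 0.75 * DRIFT_THRESHOLD_M:
        return "gentle_pulse"            # early, subtle nudge back toward centre
    return None

print(haptic_cue(LaneState(offset_m=1.5, turn_signal_on=False)))  # -> strong_pulse
```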
Monday, February 02, 2026
FREMONT, CA: With uses in industrial processes, environmental monitoring, and scientific study, underwater robotics is gaining recognition as a game-changing technology. These devices, built to function in aquatic settings, are transforming the way we investigate and make use of underwater environments. The developing field is reshaping industries ranging from underwater infrastructure maintenance to deep-sea exploration.

Recent technical developments have greatly expanded the capabilities of underwater robotics. Advances in sensors, cameras, and communication systems have improved robot performance and functionality, while better battery life and energy-efficient designs allow robots to operate for longer periods, reach greater depths, and cover larger areas. These advances have broadened the range of tasks underwater robots can perform, from scientific research to industrial inspections.

Underwater robotics plays a crucial role in scientific research and exploration, particularly in deep-sea environments that are otherwise inaccessible. Robots provide valuable data on oceanography, marine biology, and climate change, and have been instrumental in discovering new species, mapping the ocean floor, and monitoring environmental changes. High-resolution imaging and advanced sonar technologies allow detailed mapping and exploration of underwater environments, and the ability to operate in extreme conditions while capturing high-quality data has revolutionized marine science.

In the industrial sector, underwater robots are increasingly used to inspect and maintain underwater infrastructure, including oil and gas pipelines, offshore platforms, and underwater cables. Equipped with specialized tools, they can perform tasks such as welding, cleaning, and repairing, reducing the need for human divers and minimizing the risks associated with underwater work. This is particularly valuable in remote or hazardous locations where traditional methods are challenging or unsafe.

Environmental monitoring and conservation efforts also benefit significantly from underwater robotics. Underwater robots are increasingly used in search and recovery operations, including searching for sunken ships, aircraft, and lost cargo on the sea floor. Performing inspections, maintenance, and research tasks with robots minimizes the need for expensive human interventions and reduces downtime, while enhancing the effectiveness and safety of search missions by providing precise and detailed information that aids recovery efforts.

Integrating AI and machine learning further advances these capabilities: AI algorithms enable robots to process and analyze large volumes of data in real time, improving their decision-making and autonomy, while ML allows them to adapt to changing environments, recognize patterns, and perform complex tasks with greater precision. The rise of underwater robotics also brings economic benefits by reducing operational costs and increasing efficiency, and its development and deployment create job opportunities in engineering, technology, and marine sciences, contributing to economic growth in these fields.

As these capabilities mature, underwater robotics will further extend its impact across sectors, driving innovation and improving our understanding of the underwater world.
Monday, February 02, 2026
The geospatial industry is witnessing a shift as significant as the transition from theodolites to GPS. At the epicenter of this transformation is the convergence of Unmanned Aerial Vehicles (UAVs) and advanced photogrammetry. While aerial surveying has existed for a century, the field has moved beyond simple photography into an era of computational photogrammetry. In this new phase, high-resolution imagery is transformed into mathematically rigorous, centimeter-accurate 3D terrain models, democratizing high-precision data.

This evolution is not merely about capturing a bird's-eye view; it is about digitizing the physical world. Modern drone surveying workflows allow surveyors, engineers, and land managers to reconstruct reality with a fidelity that rivals traditional terrestrial methods, but with exponentially higher speed and coverage. The process converts 2D pixels into 3D coordinates, transforming flat images into actionable spatial data.

Flight Geometry and Sensor Fidelity

High-fidelity 3D modeling depends fundamentally on the quality and precision of data acquisition, beginning with the sensor technology used during capture. Modern survey-grade drones employ mechanical global shutters that eliminate the geometric distortions associated with electronic rolling shutters, particularly during high-speed flight, ensuring each frame preserves accurate spatial relationships. Equally important is the flight path: photogrammetry relies on parallax, which is achieved through structured-grid missions designed to maintain high forward (75–80 percent) and side (60–70 percent) overlap. Such redundancy enables software to triangulate depth by observing the same ground features from multiple perspectives. Ground Sampling Distance (GSD) has become the benchmark for evaluating resolution, with lower GSD values directly correlating with more detailed and reliable terrain outputs.

To complement nadir imagery, current workflows incorporate oblique captures, typically at 30–45 degrees, to enhance the reconstruction of vertical faces, built structures, and complex landscapes. While nadir images provide strong planar accuracy, oblique perspectives introduce critical side-wall visibility, allowing models to transition from simple surface projections to fully realized volumetric representations. This integrated approach ensures that modern 3D models deliver both geometric accuracy and comprehensive spatial completeness.

Algorithmic Alchemy: Structure from Motion (SfM) and Point Clouds

Once data acquisition is complete, the primary workload shifts from the drone to the processing workstation, where photogrammetric reconstruction begins. This process is powered by Structure from Motion (SfM), an algorithmic technique that simultaneously estimates both camera parameters and scene geometry, an improvement over traditional photogrammetry, which required predefined camera positions. The system performs feature extraction by scanning thousands of images to identify millions of key points, such as pavement edges, rocks, and distinct surface textures. These features are then matched across overlapping images, allowing the software to track specific points captured from different viewpoints. When a point is identified across multiple photos, its precise three-dimensional position can be determined by triangulation using collinearity principles. This process produces a sparse point cloud that serves as the initial geometric framework for the terrain.
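To make the triangulation step concrete, the following minimal sketch (Python with NumPy; the function name and the camera matrices P1 and P2 are illustrative, not taken from any particular photogrammetry package) shows how one matched feature could be lifted to a 3D coordinate from two overlapping photos using the standard linear (DLT) formulation. Production SfM pipelines repeat this across millions of matches before the bundle adjustment described next refines the result.

```python
# A minimal sketch of linear (DLT) triangulation, not the pipeline of any specific
# photogrammetry package. Assumes two 3x4 camera projection matrices P1 and P2
# (as recovered by SfM) and the pixel coordinates of the same ground feature
# matched in both images.
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Recover the 3D position of one feature observed in two overlapping images."""
    (u1, v1), (u2, v2) = uv1, uv2
    # Each observation adds two linear constraints on the homogeneous point X.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 in the least-squares sense; X is the last right singular vector.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean coordinates

# Toy usage: two cameras 2 m apart, both observing a ground point at the origin.
P1 = np.hstack([np.eye(3), np.array([[0.0], [0.0], [10.0]])])
P2 = np.hstack([np.eye(3), np.array([[-2.0], [0.0], [10.0]])])
print(triangulate_point(P1, P2, uv1=(0.0, 0.0), uv2=(-0.2, 0.0)))  # ~ (0, 0, 0)
```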
Subsequently, a bundle block adjustment refines this framework through rigorous mathematical optimization, minimizing discrepancies between observed and reconstructed point locations and ensuring a cohesive geometric solution. The culmination of these steps is the generation of a dense point cloud, which in modern workflows often comprises hundreds of millions of points. Each point carries both spatial coordinates and RGB values, resulting in a highly detailed, photorealistic representation of the surveyed area, often exceeding the density of traditional ground-based measurements.

A critical enhancement to this workflow is the integration of Real-Time Kinematic (RTK) and Post-Processing Kinematic (PPK) positioning. By recording the drone's position with centimeter-level accuracy at the moment each image is captured, the resulting point cloud is automatically aligned to the correct coordinate system. This significantly reduces reliance on physical Ground Control Points (GCPs), streamlines field operations, and maintains high global accuracy throughout the final dataset.

From Data to Intelligence: Orthomosaics and Digital Elevation Models

Photogrammetry derives its value from the deliverables produced from the point cloud, which have become standardized across the industry as orthomosaics and elevation models. An orthomosaic is not merely a stitched aerial panorama; it is a geometrically corrected image created through orthorectification using the underlying elevation model. This correction removes perspective distortion, eliminates scale variation caused by terrain relief, and produces a map-accurate image with consistent scale throughout. As a result, users can measure distances, areas, and angles directly on the orthomosaic with confidence. Advanced blending algorithms ensure seamless transitions between individual images, balancing color and exposure to create a continuous, uniform representation of the site.

The 3D information derived from photogrammetry is further processed into grid-based elevation models, primarily distinguished as Digital Surface Models (DSMs) and Digital Terrain Models (DTMs). A DSM reflects the captured surface, including vegetation, structures, and other objects, making it valuable for applications such as line-of-sight analysis and obstruction assessment. In contrast, a DTM isolates bare earth by filtering out non-ground points using classification algorithms, thereby producing an accurate representation of the underlying terrain. These models serve as the foundation for generating topographic contours, which modern software produces directly from the DTM, giving surveyors complete site coverage rather than interpolated grid points. The dataset's volumetric nature enables precise stockpile volume calculations and detailed cut-and-fill analysis, supporting accurate earthwork planning by comparing existing conditions with design surfaces (a minimal grid-differencing sketch follows at the end of this article).

Today, photogrammetry in drone surveying is defined by integration and automation. It is a workflow in which the physical acquisition of images and the digital reconstruction of geometry are tightly intertwined. By leveraging high-resolution sensors, precise flight paths, and powerful SfM algorithms, the industry has established a terrain-modeling method that is both scalable and scientifically rigorous.
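As a closing illustration of the cut-and-fill analysis mentioned above, the sketch below (Python with NumPy; the grids, function name, and values are hypothetical, not the output of any specific surveying product) differences a gridded existing surface against a design surface to estimate cut and fill volumes. Real packages layer ground classification, boundary handling, and interpolation on top of this basic grid-differencing idea.

```python
# A minimal sketch of grid-based cut-and-fill estimation, with hypothetical values;
# not the algorithm of any specific surveying product. Both surfaces are assumed to
# be sampled on the same regular grid (e.g., exported from a DTM and a design model).
import numpy as np

def cut_fill_volumes(existing, design, cell_size):
    """Return (cut, fill) volumes for two elevation grids of identical shape.

    existing  : 2D array of existing-ground elevations
    design    : 2D array of design-surface elevations on the same grid
    cell_size : ground length of one grid cell edge (same linear unit as elevations)
    """
    diff = existing - design                   # positive where material sits above the design
    cell_area = cell_size ** 2
    cut = diff[diff > 0].sum() * cell_area     # material to remove
    fill = -diff[diff < 0].sum() * cell_area   # material to bring in
    return cut, fill

# Toy 3x3 grid at 0.5 m resolution against a flat design surface at 10.0 m.
existing = np.array([[10.2, 10.4, 10.1],
                     [10.0, 10.6, 10.3],
                     [ 9.9, 10.1, 10.0]])
design = np.full_like(existing, 10.0)
print(cut_fill_volumes(existing, design, cell_size=0.5))  # volumes in cubic metres
```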
Friday, January 30, 2026
FREMONT, CA: Innovative technologies such as smart patches and smart socks, together with a decisive move towards remote patient monitoring (RPM), are driving the growth of the global digital health industry. This development ushers in a new era of home-based patient care, changing how medical services are delivered and used. AI is enhancing healthcare accessibility and efficiency; the leap from mere gadgets to life-changing tools has begun, with AI playing a crucial role in bridging the gap between patients and healthcare services.

Telemedicine has emerged as a beacon of accessibility, especially after recent global challenges. It offers healthcare consultations through phone calls or video sessions, providing convenient access to patient care. Devices such as blood pressure cuffs, smartwatches, and glucometers contribute significantly to this trend, encouraging healthier lifestyles and reducing hospital visits by enabling continuous monitoring.

The wearable technology market is in an innovative phase, introducing smart patches and intelligent clothing. These devices blend seamlessly into users' daily lives, monitoring health metrics non-invasively. The trend reflects a broader industry move towards integrated healthcare solutions, which are crucial for managing chronic conditions and promoting wellness.

Mental health technology has undergone a digital renaissance, with numerous apps offering personalized solutions for managing emotional well-being. These tools not only enhance self-awareness but also support effective management of mental health conditions. Innovations such as therapeutic video games blend diagnosis and treatment, offering a novel, engaging approach to mental health care.

The advent of AR and VR technologies is redefining medical training and patient recovery. These technologies offer immersive experiences, creating a risk-free platform for medical professionals to practice and for patients to engage in recovery exercises. As technology evolves, the healthcare sector stands at the precipice of a significant transformation. The focus on personalization, accessibility, and effectiveness heralds a new era in medical care and patient experience.
Friday, January 30, 2026
Fremont, CA: Palletizing automation is becoming essential as supply chains and logistics evolve, but fully automated systems struggle to adapt to unanticipated situations, so human intervention is still required. Where heavy lifting is involved, exoskeletons can help.

Exoskeletons with Power, Versatility, and Human Intelligence

Exoskeletons are wearable devices that protect workers in heavy-lifting industries. They use artificial intelligence to provide individualized support and bring manual work into the digital age. These smart exoskeletons can assist with lifting heavy objects and handling unexpected challenges, raising the question of what role they play in process automation.

Human Ingenuity Meets Automation Efficiency

Modern supply chain management relies on integrating human intelligence, especially in palletization. Automation keeps expanding in warehouses, but complete automation is rare, especially when items are not standardized. Automation solutions have to be feasible and cost-effective, and efficiency and reliability still demand a human touch. This is where exoskeletons come in: palletization operations benefit from workers' unique ability to adapt, troubleshoot, and make intuitive decisions.

Versatility Meets Power

Powered exoskeletons support the lower back, reducing strain during palletizing. With battery-powered robotic motors, workers can accomplish physically demanding tasks more safely and accurately, which makes these suits well suited to warehouses where lifting heavy objects of all shapes and sizes is a regular occurrence.

The Magic of Connectivity and Artificial Intelligence

Advanced exoskeletons can provide personalized support and reduce workplace accidents. AI-connected exoskeletons monitor movements and posture in real time: power suits with smart sensors can warn users if they are in a poor posture or lifting incorrectly, reducing fatigue and accidents. This early-warning capability makes workers safer, reduces sickness and absenteeism, and increases productivity. Connectivity between exoskeletons and a company's IT ecosystem enables real-time communication between workers, machines, and systems.
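As a rough illustration of the kind of real-time posture and load monitoring described above, the following Python sketch checks a single sensor sample against simple limits; the field names and thresholds are invented for the example and are not drawn from any ergonomic standard or vendor's exoskeleton firmware.

```python
# A simplified sketch of a sensor-driven lifting alert, assuming the exoskeleton
# streams trunk-flexion angle (degrees) and supported load (kg). Field names and
# thresholds are invented for the example, not drawn from any ergonomic standard
# or vendor firmware.
from dataclasses import dataclass
from typing import List

@dataclass
class LiftSample:
    trunk_flexion_deg: float   # forward bending of the torso
    load_kg: float             # load currently supported by the suit

MAX_SAFE_FLEXION_DEG = 45.0    # illustrative limit
MAX_SAFE_LOAD_KG = 25.0        # illustrative limit

def check_lift(sample: LiftSample) -> List[str]:
    """Return the warnings the suit should surface to the wearer for this sample."""
    warnings = []
    if sample.trunk_flexion_deg > MAX_SAFE_FLEXION_DEG:
        warnings.append("Excessive forward bend: lift with the legs and keep the back straight.")
    if sample.load_kg > MAX_SAFE_LOAD_KG:
        warnings.append("Load exceeds the configured limit: get assistance or split the load.")
    return warnings

print(check_lift(LiftSample(trunk_flexion_deg=52.0, load_kg=18.0)))
```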