Add reliable, up-to-date reality data to all your projects

The Reality Modeling WorkSuite is a sales bundle that gives you access to Bentley’s most popular reality modeling applications: ContextCapture, ContextCapture Editor, and ProjectWise ContextShare, at a discounted price. Reality capture through 3D photogrammetry or laser scanning can be difficult. Save time with Bentley’s solutions and consistently produce high-fidelity 3D reality models.


End-to-end solution for adding digital context to your projects

Bentley’s reality modeling software can handle projects of any size and data from many sources, including point clouds, imagery, textured 3D meshes, and traditional GIS resources. You can integrate and combine all your reality data into a single digital context. Easily visualize and navigate 3D mapping data in real time in 2D and 3D. Take advantage of automated measurements and extract features for asset inventory, terrain creation, and asset verification and attribution. Add real-world context throughout the project lifecycle, across design, construction, and operations.

“Bentley’s ContextCapture enabled multi-engine processing using not just images but point clouds, something that no other software provided. It proved to be consequential in completing the project with quality and client satisfaction.”

Technical Capabilities

The Reality Modeling WorkSuite sales bundle includes ContextCapture, ContextCapture Editor, and ProjectWise ContextShare.

ContextCapture and ContextCapture Editor

Produce 3D models of existing conditions for infrastructure projects, derived from photographs and/or point clouds. These highly detailed, 3D reality meshes provide precise real-world context for design, construction, and operations decisions throughout the lifecycle of a project. Develop precise reality meshes affordably with less investment of time and resources in specialized acquisition devices and associated training. You can easily produce 3D models using photos taken with an ordinary camera and/or LiDAR point clouds captured with a laser scanner, resulting in fine details, sharp edges, and geometric accuracy.  Dramatically reduce processing time with the ability to run two ContextCapture instances in parallel on a single project.

Extend your capabilities to extract value from reality modeling data with ContextCapture Editor, a 3D CAD module for editing and analyzing reality data, included with ContextCapture. ContextCapture Editor enables fast and easy manipulation of meshes of any scale as well as the generation of cross sections, extraction of ground and breaklines, and production of orthophotos, 3D PDFs, and iModels. You can integrate your meshes with GIS and engineering data to enable the intuitive search, navigation, visualization, and animation of that information within the visual context of the mesh to quickly and efficiently support the design process.

ContextCapture Technical Capabilities

Input
  • Multiple camera project management
  • Multi-camera rig
  • Visible field
  • Infrared/thermal imagery
  • Videos
  • Laser point cloud
  • Surface constraints: imported from 3rd party or automatically detected using AI
  • Metadata file import
  • EXIF
Calibration / Aerotriangulation (AT)
  • Automatic calibration / AT / bundle adjustment
  • Parallelization ability on ContextCapture, ContextCapture Center and ContextCapture Cloud Processing Service
  • Project size limitation
  • Control points management
  • Block management for large AT
  • Quality report
  • Laserscan/photo automatic registration
  • Splats display mode
Georeferencing
  • GEOCS management
  • Georeferencing of generated results
  • QR-Codes, April tags, and Chili tags: Ground control points automation
Scalability
  • Tiling
  • Parallel processing possible on two computers
Computation
  • GPU based
  • Multi-GPU processing based on Vulkan (optional)
  • Background processing
  • Scripting language support / SDK
Editing
  • Touch-up capabilities (export/reimport of OBJ/DGN)
    • 3D Mesh and orthophoto integrated touch-up capabilities
    • 3D mesh and orthophoto touch-up capabilities through third party application
  • Orthophoto visualization
  • DEM / DSM visualization
  • DTM extraction
  • Cross-sections
  • Contour lines (with Scalable Terrain Model)
  • Point cloud filtering and classification
  • Modeling feature
  • Support of streamed reality meshes
  • Create scalable mesh from terrain data
  • Volume calculation
Output and Interoperability
  • Multiresolution mesh (3MX, 3SM and Cesium 3D Tiles)
  • Bentley DGN (mesh element)
  • 3D CAD Neutral formats (OBJ, FBX)
  • KML export (mesh)
  • Esri I3S / I3P
  • Other 3D GIS formats (SpacEyes, LOD Tree, OSGB)
  • 3D PDF
  • AT result export (camera calibration and photo poses)
  • DEM / DSM generation
  • True orthophoto generation
  • Blockwise color equalization
  • Point cloud (LAS, LAZ, and POD)
  • Input data resolution texture mode
  • AT quality report
  • Animations (fly-through video generation)
  • QR code: 3D spatial registration of assets
Viewing
  • Free ContextCapture Viewer
  • Web viewing
Measurement and Analysis
  • Distances and positions
  • Volumes and surfaces
  • Input data resolution
  • Photo-navigation tool
Bentley CONNECT
  • Upload to ProjectWise ContextShare
  • Reality mesh streaming from ProjectWise ContextShare
  • Associate to CONNECT project
  • CONNECT Advisor

ProjectWise ContextShare

ProjectWise ContextShare is a cloud service for storing, managing, and sharing reality data. Collaborate better by sharing visuals of the 3D reality mesh with your teams. ContextShare extends Bentley’s ProjectWise connected data environment to securely manage, share, and stream reality meshes and their input sources across project teams and applications, increasing productivity and collaboration. It enables you to stream large amounts of reality modeling data without the need for high-end hardware or complex IT infrastructure. ContextShare is accessible through Bentley software applications such as MicroStation, Bentley Descartes, and more. UAV companies and surveying and engineering firms that use reality modeling in-house can quickly access the 3D reality meshes they generate with ContextCapture.

ProjectWise ContextShare Technical Capabilities

  • Annotate reality meshes with many types of information
    Improve collaboration and communication with the annotation feature. Add value to your reality meshes by creating annotations with information, such as a name, a description, and clickable links to online resources.
  • Collaborate in real time
    Capture and share real-time changes; collaborate on latest documents anywhere and anytime over mobile and desktop devices.
  • Connect project participants through an instant-on cloud service
    Use a secure cloud-based portal to work from any location and gain built-in data backup and recovery, without needing to install or maintain any software. Connect your entire supply chain with ease, using the secure platform to efficiently collaborate without opening your firewall.
  • Find documents quickly
    Access your latest files and favorite content based on your requirements for accurate document retrieval.
  • Employ trusted file sharing
    Eliminate the redundancy and confusion often caused by documents stored on multiple sites and with different applications.
  • Put files in a project context
    Create and manage file sharing with user, workgroup, and team folder structures.
Virtuoso Subscription

Stay nimble and lower costs

We’ve bundled a 12-month license for trusted Bentley software with customizable training from experts and call it our Virtuoso Subscription. With lower upfront costs and flexible support options, businesses of all sizes can now compete with the industry’s heavy hitters.

LEARN MORE

 


Featured Training

Browse a variety of upcoming training and previously recorded courses taught by our in-house, industry experts.

View Options

Webinars

Explore our Reality Modeling webinars for best practices and engage with Virtuosity and Bentley industry experts.

Watch Now

Blogs

Read our Infrastructure Insights blog to find tips and tricks and Reality Modeling user success stories from around the world.

Read More

Frequently Asked Questions

What is the Reality Modeling WorkSuite?

The Reality Modeling WorkSuite offers surveyors and engineers access to Bentley’s most popular reality modeling applications: ContextCapture, ContextCapture Editor, and ProjectWise ContextShare, at a discounted price. Thousands of users worldwide trust Bentley’s reality modeling solutions to provide real-world digital context to their mapping, design, construction, inspection, and asset management projects.

How much does ContextCapture cost?

ContextCapture is included as a bundle with Virtuosity’s Reality Modeling WorkSuite. A practitioner license of Reality Modeling WorkSuite costs $3,902. Prices vary by region. While various types of licensing are available, a common choice is the 12-month practitioner license offered through Virtuosity, Bentley’s eCommerce store. When you purchase through Virtuosity, you get a Virtuoso Subscription, which means you get the software and “Keys” (tokens) to redeem for customizable training, mentoring, and consulting services.

Input data
Can you mix photos from different sources at different resolutions, e.g., aerial photos with photos taken from the ground?

Yes. ContextCapture is the most versatile solution on the market and automatically extracts details from photos of any resolution. You can register the datasets properly using control points.

What about panoramas? Can they be used?

Yes. This is possible provided there is more than 60% overlap between two successive source photos from the camera used to take the panorama shots.

Is it possible to use 360 cameras, such as the NCtech iris360?

The most popular fisheye cameras (GoPro, DJI, etc.) are supported and included in the camera database. 360° cameras are not recommended for this kind of process, as they introduce too much distortion.

Is it possible to use RAW photos (14-bit, 16-bit, HDR)?

Yes. Most camera manufacturers’ RAW formats are supported, but currently only 8 bits per channel are used.

Is it possible to create 3D models from video files?

Yes. ContextCapture accepts videos as input in MP4, WMV, AVI, MOV, and MPEG formats, and automatically extracts frames at a user-defined interval.
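
As an illustration of the frame-extraction idea (done here outside ContextCapture, which performs this step automatically), a minimal Python sketch using the third-party OpenCV library; the file name and interval are assumptions:

    import cv2  # third-party library, not part of ContextCapture

    VIDEO = "survey_flight.mp4"   # hypothetical input video
    INTERVAL_S = 2.0              # save one frame every 2 seconds

    cap = cv2.VideoCapture(VIDEO)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(round(fps * INTERVAL_S)))
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"frame_{saved:05d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    print(f"Saved {saved} frames at ~{INTERVAL_S} s intervals")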

Can it create models from images fully edited and modified in Photoshop or other programs, or does it need the unedited raw image files?

No. Photo editing will confuse the software’s estimation of the optical properties. However, masks can be used if a fixed area has to be removed from the photos.

Can I import calibration parameters for my camera?

Yes. You can import an OPT file, or add a camera to your database and input its specific calibration parameters, such as distortion parameters, principal point, and focal length.

Data acquisition
What is the recommended overlap between photos to achieve better accuracy?

We recommend that every part of the scene be captured in at least three neighboring photographs. The overlap must then be more than 60% in all directions.
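
As a rough back-of-the-envelope check of what 60% forward overlap implies for exposure spacing, here is a short sketch; the camera and flight values are illustrative assumptions, not Bentley recommendations:

    # Illustrative nadir-photography overlap arithmetic (assumed example values).
    altitude_m = 60.0          # flying height above ground
    focal_mm = 8.8             # lens focal length
    sensor_h_mm = 8.8          # sensor height (along flight direction)
    overlap = 0.60             # required forward overlap (60%)

    footprint_m = altitude_m * sensor_h_mm / focal_mm      # ground coverage per photo
    max_spacing_m = footprint_m * (1.0 - overlap)          # max distance between exposures

    print(f"Footprint along track: {footprint_m:.1f} m")
    print(f"Max distance between photos: {max_spacing_m:.1f} m")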

What are the requirements of the camera / photos – calibrated, fixed lens etc.?

We recommend using a camera with a reasonably large sensor (in pixels) to reduce the number of photos, and with a fixed focal length. Adjusting the zoom will confuse the software, as it will have to estimate optical properties for every photo instead of once for the entire photogroup (a group of photos with the same optical characteristics).

Do you need surveyed ground control?

Ground Control Points (GCPs) are not mandatory but are highly recommended to accurately georeference the model, correct drift on corridor acquisitions, increase altitude precision, register different datasets within the same project, and so on.

Should all photos have geotags, or can the software manage with only one?

A few geotags should be enough, but most of the time all photos acquired with a camera equipped with a GPS sensor will have geotags in their EXIF attributes.
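
To check whether your photos already carry geotags, you can inspect the EXIF GPS block yourself; a minimal sketch using the third-party Pillow library (assuming a recent Pillow version; the file name is a placeholder, and the raw values come back as degrees/minutes/seconds rationals that still need conversion):

    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    img = Image.open("DJI_0001.JPG")          # hypothetical photo
    gps_ifd = img.getexif().get_ifd(0x8825)   # 0x8825 = GPSInfo IFD
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    print(gps.get("GPSLatitude"), gps.get("GPSLatitudeRef"))
    print(gps.get("GPSLongitude"), gps.get("GPSLongitudeRef"))
    print(gps.get("GPSAltitude"))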

Are photo IDs required to stitch the photos together?

There is no need for a structured acquisition, if that is the question. Photos may be shot in any order and will be processed regardless of their IDs. That said, having metadata such as the exterior orientation (EO) and even the interior orientation (IO) for every photo helps the software during the aerotriangulation (AT) process.

What is the workflow for flying a site to capture images? I’ve flown a few from about 30 meters and wonder how to also capture the vertical faces of buildings. Can I fly a second flight path and easily incorporate those images into my model?

We offer a comprehensive acquisition guide that describes best practices for acquiring photos for a specific purpose. It is possible to acquire vertical imagery in a first flight, then acquire obliques in a second flight and add both to the same project.

Does ContextCapture have an additional application for flight planning? Specifically, a drone?

Not for now. We believe that our job is to provide the best processing software, and we cooperate with major UAV manufacturers who already have their own mission planning solutions.

How are you handling the obliquity in the image samples?

The software processes all images regardless of orientation, including oblique imagery, in the same way.

Does the camera mounted on the drone need a known coordinate or geolocation in order to represent true scale and location?

There are several ways to scale and georeference a model: through geotags embedded in the photos, through ground control points, or by adding manual tie points in the photos (scale only, in this case).

How does the software handle the background that is not relevant during the imagery acquisition?

Every static part of the scene that is sharp in appearance and not too reflective will be accurately reconstructed. If you want to focus on a specific area, the best approach is to define a Region of Interest (ROI) using the dedicated UI.

3D mesh editing
Are the 3D meshes produced fully editable or do they work like fixed blocks?

ContextCapture produces meshes in various formats and structures. They may come as individual tiles at a particular resolution in OBJ, FBX, Collada, or STL formats, or as a multiresolution mesh in 3SM or 3MX (Bentley products), OSGB (OpenSceneGraph), LOD Tree, I3S (Esri), or 3D Tiles (web viewing) format. The mesh can be edited in OBJ (geometry + texture) with a third-party application and then re-imported into ContextCapture to regenerate the LODs. An entire 3MX model, at any scale, can be loaded into MicroStation as a reference and used for design and engineering processes.
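
As a sketch of the OBJ round-trip mentioned above (edit outside ContextCapture, then re-import), here is what loading, modifying, and re-exporting a tile could look like with the third-party trimesh library; the file names and the edit itself are placeholders:

    import trimesh  # third-party mesh library, not part of ContextCapture

    # Load one exported tile as a single mesh (hypothetical file name).
    mesh = trimesh.load("Tile_001.obj", force="mesh")
    # Example edit: nudge the tile up by 0.05 m (any third-party edit would go here).
    mesh.apply_translation([0.0, 0.0, 0.05])
    mesh.export("Tile_001_edited.obj")   # re-import the edited OBJ in ContextCapture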

Can landmarks inside the 3D mesh be selected and manipulated?

Reality meshes in 3D Tiles format can be manipulated, classified, and annotated in MicroStation and in the Bentley RealityData web viewer.

Accuracy
What is the accuracy of such models?

The global accuracy is about 1-2 pixels (the resolution being the projected size of a pixel on the scene, also called the Ground Sampling Distance, or GSD, for aerial acquisitions) in the plane perpendicular to the acquisition direction, and 1-3 pixels along the main acquisition direction. Our recent benchmarks show that the global accuracy is close to LiDAR, as long as the pixel resolution is high enough.
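
To translate “1-2 pixels” into a ground distance you first need the Ground Sampling Distance; a small worked sketch with assumed camera and flight values (illustrative only, not a Bentley specification):

    # Ground Sampling Distance (GSD) and expected accuracy, using assumed values.
    pixel_size_um = 2.4      # physical pixel size on the sensor
    focal_mm = 8.8           # lens focal length
    altitude_m = 80.0        # flying height above ground

    gsd_m = (pixel_size_um * 1e-6) * altitude_m / (focal_mm * 1e-3)
    print(f"GSD: {gsd_m * 100:.1f} cm/pixel")
    print(f"Planimetric accuracy (1-2 px): {gsd_m * 100:.1f}-{2 * gsd_m * 100:.1f} cm")
    print(f"Accuracy along acquisition axis (1-3 px): {gsd_m * 100:.1f}-{3 * gsd_m * 100:.1f} cm")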

Can accuracy be improved if the camera positions can be surveyed accurately?

Survey points help to accurately georeference the model in latitude/longitude/altitude and also prevent drift across large areas, thus increasing the global accuracy of the model.

Do you have a recommended list of cameras and types of photos to use to get a certain level of accuracy?

Every acquisition process may benefit from a specific camera system. However, all cameras, from a mere smartphone to highly specialized aerial multidirectional camera systems, are supported by ContextCapture. What matters is the resolution (the projected size of pixels on the scene), the sharpness of the photos, and a fixed focal length.

Are users able to add reflective dot targets to scenes to enhance accuracy?

Reflective dots may help the software on uniform areas (a uniform white wall, for instance), where computer vision and photogrammetry algorithms struggle. They are not required on areas with enough texture.

Output
Can 3MX and 3SM be exported to LumenRT?

Definitely! This will help you enliven any captured context in minutes. 3SM is the more optimized format for this purpose.

Is there an option to export to CityGML?

Urban area models generated by ContextCapture are comparable to CityGML LOD3 but do not include any semantics.

Can these models be integrated with a 3D scanner (e.g., a Leica P40)?

Georeferenced 3D models can be overlaid with laser scans. This lets users get the best of both worlds: either complete a LiDAR acquisition with photos, or extract more detail and increase precision for the same area.

Integration with Google Earth?

ContextCapture produces multiresolution meshes in KML format, which can be loaded directly into Google Earth.

What GIS software are ContextCapture outputs compatible with?

OpenCities Map, Esri’s ArcGIS, SuperMap, and more generally, any 3D GIS or visualization software compatible with a multiresolution-tiled format (OpenCities Planner, Unity 3D, OpenSceneGraph, Eternix’ BlazeTerra, etc.).

Which Bentley software accepts the exported 3D mesh?

All V8i SS4, CONNECT, and DGNDB platform-compatible products support the 3MX format, including Descartes, Map, ABD, OpenRoads ConceptStation, and more.

Is there a list of compatible 3D printers?

ContextCapture can export STL or OBJ formats, which are widely accepted by 3D printers.

Do you have an idea of the file size? For example, if I had 200 pictures at 25 megapixels each.

A project of that size produces a mesh of about 150 MB, roughly 100 times lighter than a colored LAS point cloud and 22 times lighter than POD.

Is it possible to export only a colored point cloud?

Yes. ContextCapture can produce models in LAS and POD point cloud formats.

Where can I get the list of third-party software that can be used downstream, for example for water simulation?

For communication purposes, LumenRT will do the job quite nicely. For more technical analysis, the OpenFlows product line is better suited.

Can Bentley’s design tools use the mesh as a surface or terrain, as in Power InRoads?

Definitely! OpenRoads ConceptStation loads a 3MX as a 3D base map. Descartes can also be used to extract a DTM from our digital surface model.

Does referencing 3MX files in MicroStation use the standard reference functionality or the raster manager functionality, and is this fully integrated with ProjectWise?

MicroStation loads the Spatial Reference System (SRS) used to produce the 3MX file and references the model accordingly. We are working on closer integration with ProjectWise.

How do you combine points generated by a laser scanner with photos?

In ContextCapture, there is an “Adjust photos onto Pointcloud” feature that automatically runs alignment before 3D-reconstruction.

Is it possible to place created 3D models into different geographical zones?

The Spatial Reference System (SRS) can be selected from a library of more than 4,000 systems when creating the project, and again in the viewer after production. A user-defined reference system can also be used.
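
If you want to see what the same point looks like in two different zones before picking an SRS, here is a minimal sketch using the third-party pyproj library (the EPSG codes and coordinates are chosen purely as an example):

    from pyproj import Transformer  # third-party library, not part of ContextCapture

    # Reproject a WGS84 lon/lat point into UTM zone 33N (example EPSG codes).
    transformer = Transformer.from_crs("EPSG:4326", "EPSG:32633", always_xy=True)
    lon, lat = 15.05, 48.20
    easting, northing = transformer.transform(lon, lat)
    print(f"UTM 33N: E={easting:.2f} m, N={northing:.2f} m")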

Web Viewer
How can I publish the models to my customers?

The Bentley RealityData web viewer is the best way to share with stakeholders. It displays 3D Tiles hosted on ProjectWise ContextShare and supports photo-navigation, annotation, and permission management. ContextCapture also offers a free WebGL web viewer, which lets you publish any 3MX model, at any scale, on a web server and make it available to your users through a standard browser.

For the WebGL Viewer, is it possible to take into account the collision?

Clash detection can be done in MicroStation (on extracts of the mesh) or on point clouds in various solutions, but not in a viewer, whether web or desktop.

Processing and analysis
How automated is the processing of multiple photos?

It is fully automatic, as long as the input datasets are suitable (overlap, sharpness, optical properties, etc.).

Regarding photo control, does the software produce an AT solution report?

Yes. A report is produced at the end of the AT, containing the various RMS values as well as the processing parameters. Quality metrics are also viewable in the 3D view.

When you say “Production time” is that clock time? Or CPU processing time?

Production time is measured as clock time. The average observed production speed is about 15 gigapixels per engine per day.
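
Using that rule of thumb, you can estimate clock time for your own dataset; a quick arithmetic sketch with an assumed project size and engine count:

    # Rough production-time estimate from the ~15 Gpix/engine/day rule of thumb.
    photos = 200
    megapixels_per_photo = 25
    engines = 2                               # e.g., two parallel ContextCapture engines

    total_gpix = photos * megapixels_per_photo / 1000.0      # 5.0 Gpix
    days = total_gpix / (15.0 * engines)
    print(f"{total_gpix:.1f} Gpix -> about {days * 24:.1f} hours of clock time")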

How do you georeference the mesh in ContextCapture?

Either through control points or geotags with the photos.

Is it possible to georeference model by GPS points captured by a camera?

Yes. ContextCapture reads the EXIF parameters embedded in the photos and extracts the geotags when present, as well as other camera properties, if any.

How do you add control to photos?

Through a dedicated UI in the software. You can also load them through a text file and then identify them in your photos.

How do you scale the contents?

Either by georeferencing the model, or by adding manual tie points with a distance value in the editor.

Is there a function to manually classify the 3D point cloud for 3rd party survey?

Yes, Bentley Descartes and/or Pointools are dedicated to this type of application.

Do you have object classification tools?

ContextCapture Insights can perform such operations, provided the detectors are properly trained.

Do you have direct volume calculations in ContextCapture?

Yes, using the 3D viewer included with ContextCapture. Users can measure coordinates, distances, height differences, areas, and volumes. In the web viewer, only coordinates, distances, and height differences can be measured.

How do you calculate volumes?

Volumes are calculated by referencing either a mean plane created through the georeferenced selection polygon or a custom plane at a specific height.
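
Conceptually, the volume above a reference plane is the per-cell height difference integrated over the cell area; a minimal sketch over a gridded surface (synthetic data, illustrating the cut/fill idea rather than ContextCapture’s internal implementation):

    import numpy as np

    # Synthetic 1 m-resolution surface heights (meters) and a reference plane at z = 10 m.
    dsm = np.array([[10.2, 10.8, 11.0],
                    [10.5, 11.5, 11.2],
                    [10.1, 10.4, 10.3]])
    plane_z = 10.0
    cell_area_m2 = 1.0 * 1.0

    cut = np.clip(dsm - plane_z, 0, None).sum() * cell_area_m2    # material above the plane
    fill = np.clip(plane_z - dsm, 0, None).sum() * cell_area_m2   # voids below the plane
    print(f"Cut: {cut:.2f} m^3, Fill: {fill:.2f} m^3")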

Are there size limits in terms of MB/GB & poly count?

No. This is what makes ContextCapture so unique. City or bridge models can easily reach dozens of GB on a hard disk and be streamed through a web or local server, thanks to the multiresolution architecture and the optimization of the mesh.

Are you going to simplify the process for producing orthophotos along different axes?

This is available in ContextCapture Editor.

What are the greatest challenges users may encounter in the production of 3D models with ContextCapture?

Photo acquisition! The process is truly straightforward when the photos are appropriate in resolution, sharpness, and overlap.

Is there a method for using this software indoors?

The software applies to any photo dataset, aerial or ground, outdoor or indoor, as long as the objects in the scene are static (objects that move too much are automatically removed) and are neither highly reflective nor too uniform. The best practice for shooting photos indoors is to walk sideways with your back to a wall and shoot photos in multiple directions toward the front (slightly upwards, downwards, to the right, and to the left). The acquisition guide provides more information on this procedure.

Systems Requirements

 

ContextCapture
Minimum Hardware

At least 8 GB of RAM and an NVIDIA or AMD graphics card, or an Intel integrated graphics processor, compatible with OpenGL 3.2 and with at least 1 GB of dedicated memory.

Recommended Hardware

Microsoft Windows 7/8/10 Professional 64-bit running on a PC with at least 64 GB of RAM and an Intel Core i9 CPU (4+ cores, 4.0+ GHz) with hyper-threading enabled, plus an NVIDIA GeForce RTX 2080/2080 Ti GPU. Data should preferably be stored on fast storage.

Memory

4 GB minimum

Hard Disk

2 GB free disk space.

Video

NVIDIA or AMD graphics card, or Intel-integrated graphics processor compatible with OpenGL 3.2.

Screen Resolution

1024 x 768 or higher

Check out Webinars

View Now

Buy Reality Modeling WorkSuite