Documentation (WIP)

Introduction

The plugin scatters large numbers of objects over an area according to user-defined parameters. There is always one target object, and the area over which to scatter is specified by a texture. If you want to scatter over multiple objects, you need to add a new plugin to the scene for each target. The scattering is generated in groups. Each group can have different parameters and different models. The groups are generated sequentially, one after another, and new models are only placed in empty space; they do not collide with previous groups. After the scattering is finished, you can delete parts of it and regenerate the scattering in the emptied space. The scattering can also be converted to a single mesh or to individual instances, so it can be used with Modifiers or other native tools just like any other 3DS Max object. You can also choose to scatter procedural objects, particle sources or lights, which have no geometry of their own, and specify different objects as their collision meshes.

Adding plugin into scene and selecting the Target

First you need to add the plugin object to the scene. It is created by CLICK+DRAG, and a 2D icon shows where it has been added. Before scattering, you have to select the Target Mesh: click the Select Target button, then click an object in the 3D Viewport. The target has to be an object that can be converted to a triangle mesh, otherwise it will be ignored. The text field in the Target tab is filled with the target object's name. If the target node is deleted, the plugin automatically sets its target object to none, and generation cannot proceed.

Setting the Parameters

Before you can generate the scattering, you need to set up the parameters. The main parameters of the scattering are the following:

Models Selection

Models to be scattered can be chosen from the prepared models shipped with the plugin, or from any model in the scene that can be converted to a triangle mesh. All such models appear in the selection dialog automatically. You can also scatter objects that cannot be converted to a triangle mesh, as long as there is another node in the scene with the same name plus the "_nscollision" suffix that can be converted. Such objects also appear in the list - the original ones, not the _nscollision nodes. When starting the scattering, you must check the "Use collision objects" checkbox, otherwise these models will be skipped. Models are organized in Sets - the selection dialog automatically adds a Set for each layer, container or named selection in the scene, with all of its models already added to the Set. You can therefore easily create your own Set by creating a new layer or named selection and adding objects to it. In the dialog, you select which Sets are used for scattering by checking their checkboxes. When you click a Set icon, a probability slider appears on the right-hand side, specifying the probability of choosing this Set during the distribution process, together with a list of all the models in the Set. You can then specify particular models and their probability of selection when this Set is used.

Distribution Mask

By default, the scatter is random over the entire area of the target object. You can specify a bitmap texture to work as a distribution mask. The algorithm then reads the color value at a given point as a greyscale value ranging from 0.0 to 1.0, and uses this value as the probability that a model is placed at that point. Areas with value 0.0 will therefore end up empty, areas with value 0.5 half-empty, and so on. High-resolution textures are recommended for precision.
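The mask lookup amounts to simple rejection sampling. A minimal sketch in Python (the function and names are illustrative, not the plugin's actual API):

```python
import random

def accept_sample(mask_value, rng=random.random):
    """Treat the greyscale mask value (0.0 to 1.0) as the probability
    that a candidate point is kept."""
    return rng() < mask_value

# A candidate on a black area (0.0) is always rejected,
# one on a white area (1.0) is always accepted.
```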

Model Scale

This specifies how much the scattered objects will be scaled. It is a range, and the actual scale is picked randomly between the minimum and maximum. Alternatively, you can again use a bitmap texture, where the greyscale color value determines the scale: 0.0 means the minimum scale, 1.0 the maximum scale, and values in between are interpolated.
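The texture-driven scale reduces to interpolating between the two ends of the range. A sketch, assuming the interpolation is linear (the function name is illustrative):

```python
def sample_scale(min_scale, max_scale, texture_value):
    """Map a greyscale texture value in [0.0, 1.0] onto the scale range:
    0.0 gives the minimum scale, 1.0 the maximum, values in between
    interpolate linearly."""
    return min_scale + (max_scale - min_scale) * texture_value

# e.g. a mid-grey pixel (0.5) with a 1x-3x scale range yields a 2x scale
```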

Distance between Models

This sets the minimum distance the models keep from each other. The algorithm tries to reach the minimum, placing one sample after another, but the models usually end up a little farther apart, depending on how well they fit. The distribution is random, so some models placed close to each other can block out an area where no more models fit. However, the distance will never be less than the minimum, and the samples will not overlap. The distance can also be a range: for each model placed, the algorithm randomly chooses a value from the range and assigns it to that model. No other model can then be placed closer to this model than the chosen distance. You can also guide the distance with a bitmap texture: value 0.0 means placed models are given the minimum of the distance range, value 1.0 the maximum. The algorithm offers two ways of computing the distance between two objects - you can select the preferred one in the Generation parameters tab.

Sphere Approximation:

The algorithm computes a bounding sphere for each scattered model, and the distance between two models is taken as the distance between their bounding spheres.
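The sphere approximation can be sketched in a few lines (illustrative only; the real implementation works on the plugin's internal mesh data):

```python
import math

def sphere_distance(center_a, radius_a, center_b, radius_b):
    """Gap between two bounding spheres: distance of the centers minus
    both radii, clamped to zero when the spheres overlap."""
    center_dist = math.dist(center_a, center_b)
    return max(0.0, center_dist - radius_a - radius_b)
```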

Convex Hull:

The algorithm computes convex hulls of the scattered models, and the distance is taken as the distance between the hulls. This is a more accurate representation, but the distribution can take longer.

Different Collision Mesh

When setting up the minimum distance, you can specify a different Collision Mesh for any of the models to be used in the distance calculations (this is a global setting in the Generate parameters, near the Generate button, but it is only applied to models that have a collision mesh object present). Your collision mesh can be smaller or larger than the actual object to create various effects (for example, lightly overlapping objects when the collision mesh is smaller). It is also recommended to create a simple collision mesh for highly complex models. After the scattering is finished, the real original objects are rendered - the collision objects are used only to compute distances. To use a different collision mesh than the scattered object, add a new object to the scene with the same name as the object you want to scatter, plus the suffix "_nscollision". For example, if you want to scatter an object named "Sphere", its collision object must be named "Sphere_nscollision". Then just check the "Use collision objects" checkbox in the Generate tab, and the mesh will be found automatically. Any object that does not have a collision mesh specified uses itself for collisions, so you do not have to create collision meshes for every object you want to scatter.
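The naming convention can be sketched as a simple lookup; the scene_nodes mapping below is a stand-in for the real 3DS Max scene query, and the function name is illustrative:

```python
def collision_node_name(node_name, scene_nodes, use_collision_objects=True):
    """Return the name of the node whose mesh is used in the distance
    calculations: the "_nscollision" counterpart if it exists (and the
    option is on), otherwise the object itself."""
    candidate = node_name + "_nscollision"
    if use_collision_objects and candidate in scene_nodes:
        return candidate
    return node_name
```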

Limit

You can limit the number of models scattered in a given group. The limit serves as an upper bound - it might not be reached (if fewer models fit), but it is guaranteed there will not be more models than the limit.

Random Seed

You can control the random seed used in the generation to randomize the output. Different seeds produce different distributions. Conversely, if all parameters are identical (including the target model), the same seed produces exactly the same distribution every time the user generates it.

Model Orientation

This specifies the scattered models' orientation. Models are always oriented so that their up-axis is aligned with the surface normal at the point where they are placed. This means they always sit on the surface, facing outward from the target object. If you select Random rotation along Z axis, each model is also rotated randomly around its up-axis for extra variance. Alternatively, you can fix the orientation by specifying a forward direction, and all models will be aligned to match it. For extra variability, you can also specify a variance of the rotation to the left and right of the forward direction. For example, to simulate rain drops in a windy environment: the drops flow mostly down, but they are not perfectly aligned - each is rotated a little, depending on the wind direction.
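One common way to build such an orientation is to construct an orthonormal basis whose Z axis is the surface normal and then apply a random twist about that axis. A sketch under that assumption (illustrative, not the plugin's actual math):

```python
import math
import random

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def orientation_basis(normal, rng=random.random):
    """Orthonormal basis whose Z axis is the surface normal, with a
    random twist about that axis (the 'Random rotation along Z axis'
    option)."""
    z = normalize(normal)
    # Any vector not parallel to z works as a starting helper.
    helper = (1.0, 0.0, 0.0) if abs(z[0]) < 0.9 else (0.0, 1.0, 0.0)
    x = normalize(cross(helper, z))
    y = cross(z, x)
    angle = rng() * 2.0 * math.pi
    c, s = math.cos(angle), math.sin(angle)
    x_rot = tuple(c * xi + s * yi for xi, yi in zip(x, y))
    return x_rot, cross(z, x_rot), z
```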

Working with Groups

The actual distribution of the models works in groups. Each group can have different parameters and different models. The algorithm generates group after group, from top to bottom, and each group is aware of the models generated before it. The minimum distance is only valid within the currently generated group. This means that if the previous group had a larger distance between models (leaving large gaps between them), the next group can fill these gaps, because its minimum distance can be smaller. You can create new groups, reorder them, or copy the settings from one group to a new one and modify only some parameters. Each group has a checkbox that tells the algorithm whether that group should be used during the generation; if the group is not checked, it is ignored. There is also a button with a Lock icon. This is used when modifying an existing scattering: you can lock a group you like while changing the other groups. If a group is locked, its models remain unchanged when the user presses the Generate button again. Even if this group is not selected, its models will not be deleted or changed in any way. Other groups then avoid these placed models just as if they had been generated before them. After the generation, all models are displayed in the viewport. However, you can hide some groups from the viewport by clicking the Eye icon. Only groups with the Eye icon turned on are displayed.

Presets

The plugin comes with some Presets. Each preset specifies a number of groups with pre-filled parameters and models to scatter. You can create your own presets - just press the Save button to save the current setup as a Preset. You can then load any preset by clicking the Load button. Note, though, that if you load a new Preset, either from a file or from the list, all your current settings will be lost.

Generation Setup

As mentioned above, each group has its own distribution parameters. However, there are some common parameters that apply to all groups during the generation. The first is the Distance Computation. To compute the distance between two objects, the algorithm either creates bounding spheres for the objects and takes the distance between these spheres, or it computes convex hull representations and takes the distance between the hulls. The convex hull is more accurate for uneven shapes, but the computation can take longer. Another parameter is multi-threading. The scattering can be computed on multiple threads, which is faster, but some approximations have to be made. For example, the Limit option then only serves as an upper bound and might not be reached even if there is enough space in the area. This is because each thread needs to estimate its own sub-limit - if some threads worked faster than others, we would not want some areas to end up more densely populated than others. Single-threaded generation, on the other hand, distributes all models consecutively, so the area is guaranteed to be populated evenly, and if the limit can be reached, it will be reached. Before starting the algorithm, you can also check "Use collision objects". This is explained above in the parameters section, but in short: when this checkbox is checked, for each model being scattered the algorithm searches for its "collision object" - an object in the scene with the same name plus the "_nscollision" suffix (so if you scatter a node named "Tree", it looks for "Tree_nscollision"). If such an object exists, all collisions and distance computations are done on the collision mesh instead of the original object. For models with no _nscollision node, the algorithm simply uses their own mesh - so you can create collision meshes only for some nodes, you do not have to create them all.
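One plausible way to derive the per-thread sub-limits is to pre-split the global limit so that the threads can never exceed it in total; the plugin's actual estimation may differ, so treat this purely as a sketch:

```python
def split_limit(limit, thread_count):
    """Split a global model limit into per-thread sub-limits whose sum
    equals the limit, so no combination of threads can exceed it."""
    base, remainder = divmod(limit, thread_count)
    return [base + (1 if i < remainder else 0) for i in range(thread_count)]
```

This also illustrates why the limit might not be reached with multi-threading: a thread that runs out of space cannot hand its unused quota to a faster neighbor.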

Instance Representation

After all the models have been scattered, the plugin still holds the references to the original objects. So if the user changes anything on the original scene object, these changes will be shown on the scattered instances as well. This includes the change of geometry, material or the application of a modifier. It does not include the change of transformation matrix, because the matrix is a part of the node, not the object, and each instance has a transformation matrix of its own.

Rendering

Currently, the plugin can be rendered with the 3DS Max built-in renderers and V-Ray. In V-Ray it uses instances, so memory usage should not increase much during rendering. However, some of the built-in renderers internally create a single mesh from all the instances, so a large count of scattered models can cause a memory overflow.

Display in the Viewport

You can change how the scattered models are viewed in the 3DS Max Viewport. Instead of the actual meshes, you can choose to display a placeholder box or plane at each location. You can also limit the percentage of models shown, as the viewport gets very slow with large amounts of models. If you expect large amounts of scattered models, it is recommended to set this up before the generation, because the GUI can become unresponsive if it tries to display too many models in the viewport after the generation finishes.

Statistics

The user can see how many models were generated in each group in the Statistics tab.

Modify Group Distribution

After the generation has finished, you may want to modify the scattering result. To change the distribution, you can delete parts of it and regenerate the scattering on the affected triangles (the ones with deleted samples). There are three tools for model deletion. All of them work in the viewport, on the models visible there. So if you want to delete models from a specific group only, hide all the other groups (by un-checking the Eye icon in the group list).

  1. First, you can delete rectangular areas by clicking the Delete Area button. This switches the plugin into the delete mode, and you can click and drag a rectangle over the viewport to delete the models within it. You can click and drag multiple times, and if you mis-click, you can use the Undo button underneath. You can end the mode by clicking the Finish button or by right-clicking.
  2. You can also click the Delete Model button and then select individual models in the viewport. Again, you dismiss the delete mode by right-clicking or pressing the Finish button.
  3. The last option is to click the Delete by Brush button. Afterwards, click and hold the mouse in the viewport and drag it across the scattered models. This deletes all models in the mouse path, as if using a brush. The width of the brush must be set beforehand. This method can be laggy when a large number of models is displayed in the viewport, because the viewport must be redrawn on every brush move, so it is recommended to brush slowly.
    For all modes, there is also the Undo button to undo the last deletion. Undo only works while the user is in the delete mode. After pressing Generate or Regenerate Area, all previous changes are confirmed and can no longer be undone. Every group affected by the erasing also automatically becomes Locked (see Working with Groups above). The user can therefore also choose to generate the whole distribution again (pressing Generate) and add new groups to fill the new gaps. New models will be scattered over the entire area again (not only over the triangles affected by deleting), but the Locked groups will remain unchanged.

Convert / Export

Once you are satisfied with the generated scattering, you can convert the entire NeatScatter object to a simple Mesh. This is not recommended for large amounts of models. You can select "Group by Groups" to create one mesh per group; only groups that are visible (have the Eye icon on) will be exported. Or you can select "Group by Instances" to create one mesh per scattered model, so all instances of one model end up in one mesh. If you select both, one mesh is created per model type per group. The plugin can also convert all scattered instances into native 3DS Max instances. However, this can take a long time, because each instance is a new scene node, and adding a node to the scene is not fast. Neither the objects themselves nor their meshes are copied - they are just instanced - but a large number of nodes can still make 3DS Max very slow, so if your scattering is too large (> 20000 instances), it is recommended to export group after group (hiding the other groups during the conversion). Again, you can select the grouping during the conversion, but this only affects how the instanced nodes are placed into the layer structure. You can also export the scattering as a separate text file. This can be used for further processing in a different program, or simply to export the scattering information in a basic form.

The file specification is the following. Each model is on one line, in this form: "path_to_model_file";"model_name";1;0;0;0;0;1;0;0;0;0;1;0;0;0;0;1

  1. Path to the .max file the model was taken from. This is empty if a model from the scene was used.
  2. Model name.
  3. 16 numbers - the transformation matrix, written row by row (4 rows of 4 values, as in both examples here). Example:
     "C:\Program Files\Studio D76\PluginMax2016\models/data\kapky_all.max";"3";0.425356;-0.975842;0;0;0.975842;0.425356;0;0;0;0;1.06452;0;-7.98129;-11.4014;0;1
     The grouping is not supported for the export to file yet.
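Since the fields are quoted and semicolon-separated, one exported line can be parsed with a standard CSV reader. A sketch in Python (the function name is illustrative):

```python
import csv
import io

def parse_export_line(line):
    """Split one exported line into (path, model_name, matrix), where
    matrix is the flat list of floats, row by row."""
    fields = next(csv.reader(io.StringIO(line), delimiter=";", quotechar='"'))
    return fields[0], fields[1], [float(v) for v in fields[2:]]
```

For the identity example above, this yields an empty path, the model name, and the 16 matrix values.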