Experimenting with various levels of abstraction, trying to focus on interfaces.
Magically aligning with the flow of the week, I had certain independent tasks to focus on, both at work and in the projects I contribute to.
As mentioned in week0: setup, following my diploma development at IAAC, I have been participating in the development of a tool for designers and digital fabrication enthusiasts: HoloFab. HoloFab is a combination of a Grasshopper 3D plugin and a device interface enabled with Augmented Reality. At the moment we have added support for ARCore-enabled Android devices and the HoloLens. The idea behind the project is to abstract away communication and data handling and to generalize the communication logic, so that users don't have to worry about what happens under the hood, while keeping it open source so anyone can customize it for their own needs. The project is written in C#: with Unity on the AR interface side, and with custom components for Grasshopper.
The project is open source and in the beta stage of development. One of our focuses was to generalize the logic and separate it from the implementations on the individual platforms. For instance, network communication differs on apps for the Universal Windows Platform: the system functionality readily available in normal standalone apps is trimmed there. The same goes for threading. What we did was define methods common to both platforms that are then implemented individually for each. This way, depending on the platform, the actual Unity or Grasshopper components indirectly interface with those methods. Thus you only need to worry about how the data is serialized (encoded), and about setting up structures to send and receive that data, if the ones we implemented are not sufficient and you want to extend the tool.
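As a rough illustration of this pattern (a minimal sketch with hypothetical names, not the actual HoloFab sources), the same method signature can be given a different body per platform through conditional compilation:

''' C#
#if !WINDOWS_UWP
using System.Net.Sockets;
#endif

// A sketch of the platform split: callers only ever see Send(),
// while the body is chosen at compile time per target platform.
public static class UdpSender {
    public static void Send(string ip, int port, byte[] data) {
#if WINDOWS_UWP
        // UWP trims System.Net.Sockets; an implementation based on
        // Windows.Networking.Sockets would go here instead.
        throw new System.NotImplementedException();
#else
        // Standalone, editor and Android builds can use the regular .NET sockets.
        using (UdpClient client = new UdpClient()) {
            client.Send(data, data.Length, ip, port);
        }
#endif
    }
}
'''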
Similar problems appear with the interaction logic. On the phone you can have more traditional layouts; moreover, touch (at least single-finger touch) behaves like a mouse: it has a position in screen coordinates that can be processed in the same way as a mouse position to select items. On the HoloLens, however, you have gestures to interact with the environment, but, at least on HoloLens 1, you don't get the actual coordinates of the hand, only relative shifts and basic states (tapped, ready to tap, etc.). This makes the logic of, for example, dragging different, because the relative movement of your hand from the start of the gesture is translated directly. So in the program we are implementing an Interaction Manager that takes care of catching the corresponding events and applying the necessary corrections for each system.
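To make the idea concrete, here is a hypothetical sketch (illustrative names, not the actual HoloFab code) of reducing both input styles to a single drag delta that the rest of the application consumes:

''' C#
using UnityEngine;

// Common contract: the rest of the app only ever asks for the current drag state.
public interface IDragSource {
    bool IsDragging { get; }
    Vector3 DragDelta { get; } // Movement since the drag started.
}

// Mouse / single-finger touch: absolute screen coordinates are available,
// so the delta is simply the difference from the press position.
public class PointerDragSource : IDragSource {
    private Vector3 startPosition;
    public bool IsDragging { get; private set; }
    public Vector3 DragDelta { get; private set; }

    public void Update() {
        if (Input.GetMouseButtonDown(0)) {
            startPosition = Input.mousePosition;
            IsDragging = true;
        } else if (IsDragging && Input.GetMouseButton(0)) {
            DragDelta = Input.mousePosition - startPosition;
        } else {
            IsDragging = false;
            DragDelta = Vector3.zero;
        }
    }
}

// HoloLens 1: no absolute hand coordinates, only relative shifts reported
// by the gesture recognizer, so the delta is taken from manipulation events.
public class GestureDragSource : IDragSource {
    public bool IsDragging { get; private set; }
    public Vector3 DragDelta { get; private set; }

    // To be called from the platform gesture events (manipulation started / updated / ended).
    public void OnManipulation(bool active, Vector3 cumulativeDelta) {
        IsDragging = active;
        DragDelta = active ? cumulativeDelta : Vector3.zero;
    }
}
'''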
Moreover, we were looking into closing the loop: not using this as a fancy visualization system, but communicating data from the interface and from the scanned environment back to the user. For example, we added an option to measure certain points relative to the coordinate system of the virtual environment. The next step would be to actually communicate this data back to the user. As a proof of concept, for the moment we added an option to send some UI elements, such as sliders, counters and toggles, so that you can directly control your parametric model from the phone or the glasses.
If you find the project interesting, let us know on our site.
Another small experiment that came up over this week is a tool to randomly populate meshes in Unity. The idea is that the overall outline is important, but the quantity does not allow for manual placement, nor is the precise location relevant. Thus I developed a small script to populate a surface.
To populate a surface I defined the following steps:
1. map face surface areas to have a weighted influence on the random point generation (using this hint for finding the distance from a point to a line for the area evaluation);
2. evaluate the total surface area of the 3D mesh;
3. stack the faces up, in arbitrary sequence, onto a 0-to-1 scale based on their area relative to the total area;
4. when generating, produce three random numbers per point:
    4.1. the first (0 to 1) selects a face based on the scale defined before;
    4.2. two more (0 to 1) select random factors along two of the triangle's sides, so that combined they define a point on the parallelogram complementary to the face triangle, reflecting the point back if it happens to end up outside the actual face (a trick inspired by this blogpost);
5. populate the generated location with the prefab.
Having this as pseudo-code, I could draft out the actual code. And after fixing minor problems with recovering the indices in step 4.1, the script produced the desired behavior like a charm!
''' C#
using System.Collections.Generic;
using UnityEngine;

//[RequireComponent(typeof(MeshFilter))]
public class PopulateMesh : MonoBehaviour {
    [Tooltip("Count of items to generate.")]
    [Range(1, 10000)]
    public int count = 1000;
    [Tooltip("An example of an object to place.")]
    public GameObject goExample;

    Transform generatedObjectHolder;

    [ContextMenu("Populate Mesh Surface")]
    public void PopulateSurface() {
        ClearGeneratedObjects();
        // Combine all child meshes into a single mesh in world space.
        MeshFilter[] meshFilters = GetComponentsInChildren<MeshFilter>();
        if ((meshFilters == null) || (meshFilters.Length == 0)) return;
        CombineInstance[] combine = new CombineInstance[meshFilters.Length];
        for (int i = 0; i < meshFilters.Length; i++) {
            combine[i].mesh = meshFilters[i].sharedMesh;
            combine[i].transform = meshFilters[i].transform.localToWorldMatrix;
        }
        Mesh mesh = new Mesh();
        mesh.CombineMeshes(combine);
        Vector3[] vertices = mesh.vertices;
        int[] triangles = mesh.triangles;
        ////////////////////////////////////////////////////////////////////////
        // Prepare: evaluate face areas.
        float totalArea = 0f;
        List<float> faceAreas = new List<float>();
        float area = 0f;
        int vertexIndex0 = 0, vertexIndex1 = 0, vertexIndex2 = 0;
        Vector3 direction;
        float sideLength = 0f;
        float height = 0f;
        for (int i = 0; i < triangles.Length; i += 3) {
            vertexIndex0 = triangles[i];
            vertexIndex1 = triangles[i + 1];
            vertexIndex2 = triangles[i + 2];
            // Area of a triangle: base side length times height over two.
            direction = vertices[vertexIndex1] - vertices[vertexIndex0];
            sideLength = direction.magnitude;
            height = PointDistance(vertices[vertexIndex0], direction, vertices[vertexIndex2]);
            area = sideLength * height / 2.0f;
            totalArea += area;
            faceAreas.Add(area);
        }
        Debug.Log("Populating surface. Calculated total area: " + totalArea);
        ////////////////////////////////////////////////////////////////////////
        // The holder keeps a world-identity pose after SetParent,
        // so localPosition below effectively places items in world space.
        GameObject goHolder = new GameObject("GeneratedObjectHolder");
        goHolder.transform.SetParent(transform);
        this.generatedObjectHolder = goHolder.transform;
        // Populate Surface.
        List<Vector3> points = new List<Vector3>();
        Vector3 direction0, direction1;
        float randomFactor0 = 0f, randomFactor1 = 0f;
        int index = 0;
        Vector3 currentPoint;
        GameObject goItem;
        while (points.Count < this.count) {
            // Choose a random face, weighted by face area.
            randomFactor0 = Random.Range(0f, 1f);
            index = FindFace(faceAreas, totalArea, randomFactor0);
            vertexIndex0 = triangles[index * 3];
            vertexIndex1 = triangles[index * 3 + 1];
            vertexIndex2 = triangles[index * 3 + 2];
            // Get randomly scaled vectors of the sides.
            direction0 = vertices[vertexIndex1] - vertices[vertexIndex0];
            randomFactor0 = Random.Range(0f, 1f);
            direction1 = vertices[vertexIndex2] - vertices[vertexIndex0];
            randomFactor1 = Random.Range(0f, 1f);
            // Find a point in the face from the two scaled sides.
            if (randomFactor0 + randomFactor1 > 1f) {
                // Flip factors, if the point falls outside of the triangle.
                randomFactor0 = 1f - randomFactor0;
                randomFactor1 = 1f - randomFactor1;
            }
            currentPoint = vertices[vertexIndex0]
                         + direction0 * randomFactor0
                         + direction1 * randomFactor1;
            // Store the generated point.
            points.Add(currentPoint);
            // Populate with goExample.
            if (this.goExample != null) {
                goItem = Instantiate(this.goExample, this.generatedObjectHolder);
                goItem.transform.localPosition = currentPoint;
            }
        }
    }
    [ContextMenu("Clear generated items")]
    public void ClearGeneratedObjects() {
        if (this.generatedObjectHolder != null)
            DestroyImmediate(this.generatedObjectHolder.gameObject);
    }
    ////////////////////////////////////////////////////////////////////////////
    // Calculate the distance from a point to a mathematical line.
    private float PointDistance(Vector3 linePoint, Vector3 lineVector, Vector3 point) {
        return (point - PointProject(linePoint, lineVector, point)).magnitude;
    }
    // Project a point onto a mathematical line.
    private Vector3 PointProject(Vector3 linePoint, Vector3 lineVector, Vector3 point) {
        lineVector.Normalize();
        Vector3 relativeVector = point - linePoint;
        float projectionLength = Vector3.Dot(relativeVector, lineVector);
        return linePoint + lineVector * projectionLength;
    }
    // Find the index of the face whose cumulative area fraction just exceeds the given factor.
    private int FindFace(List<float> areas, float totalArea, float factor) {
        float currentFactor = 0f;
        int i = 0;
        while ((currentFactor <= factor) && (i < areas.Count)) {
            currentFactor += areas[i] / totalArea;
            i++;
        }
        i--;
        //Debug.Log("For factor: " + factor + ", index found: " + i);
        return i;
    }
}
'''
Future steps include populating a mesh volume.
The logic behind populating a volume is similar, but requires a rather involved additional step: tessellating the mesh into tetrahedra and generating points within those instead of within the faces themselves. Additionally, it is necessary to check whether each point lies within the volume, to discard artifacts around concave regions. This algorithm is not as clean, since not every generated point is useful, and some extra steps have to happen depending on the complexity of the mesh.
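For the per-tetrahedron step, one known approach folds three random factors into valid barycentric coordinates, analogous to the parallelogram reflection trick used for the triangles above. A minimal sketch (assuming the tessellation is already available):

''' C#
using UnityEngine;

public static class TetrahedronSampler {
    // Generate a uniformly distributed point inside the tetrahedron (a, b, c, d).
    public static Vector3 RandomPoint(Vector3 a, Vector3 b, Vector3 c, Vector3 d) {
        float s = Random.Range(0f, 1f);
        float t = Random.Range(0f, 1f);
        float u = Random.Range(0f, 1f);
        // Fold the unit cube into the region s + t + u <= 1.
        if (s + t > 1f) { s = 1f - s; t = 1f - t; }
        if (t + u > 1f) {
            float tmp = u;
            u = 1f - s - t;
            t = 1f - tmp;
        } else if (s + t + u > 1f) {
            float tmp = u;
            u = s + t + u - 1f;
            s = 1f - t - tmp;
        }
        // Combine the three folded factors along the edges from corner a.
        return a + s * (b - a) + t * (c - a) + u * (d - a);
    }
}
'''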
// A side note: using this forum I adapted my Unity to work with the Atom editor by changing Edit > Preferences > External Tools:
-n "$(ProjectPath)" "$(File):$(Line)"
Moving on to some experiments dedicated to the FabAcademy, I was curious to test out communication between Unity, which I use on a daily basis, and an Arduino, and thus potential physical interfaces. This forum and this one have been very useful in this investigation.
As a first test, I went ahead and established communication directly over a serial connection.
Unity conveniently has a well-defined interface for serial communication. All that remained to do was to create a C# script interfacing with it, and a demo user interface for it.
A behaviour along the following lines interfaces with the serial library (a minimal sketch, assuming the project's API compatibility level is set so that System.IO.Ports is available, e.g. .NET 4.x):
''' C#
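using System.IO.Ports;
using UnityEngine;

// A minimal sketch of such a behaviour (names and defaults are placeholders):
// open the port, poll for incoming lines each frame, and expose a Send method.
public class SerialInterface : MonoBehaviour {
    [Tooltip("Serial port of the connected Arduino, e.g. COM4 or /dev/ttyUSB0.")]
    public string portName = "COM4";
    [Tooltip("Baud rate matching the Arduino sketch.")]
    public int baudRate = 9600;

    private SerialPort port;

    void Start() {
        port = new SerialPort(portName, baudRate);
        port.ReadTimeout = 50; // Keep reads from blocking the main thread for long.
        port.Open();
    }

    void Update() {
        if (port == null || !port.IsOpen) return;
        try {
            // Assuming the Arduino sends one message per line.
            string line = port.ReadLine();
            Debug.Log("Received: " + line);
        } catch (System.TimeoutException) {
            // No complete line arrived this frame - nothing to do.
        }
    }

    // Send a message to the Arduino, e.g. from a UI element.
    public void Send(string message) {
        if (port != null && port.IsOpen)
            port.WriteLine(message);
    }

    void OnDestroy() {
        if (port != null && port.IsOpen)
            port.Close();
    }
}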
'''