In Blender, greebles consisting of multiple loose parts can pose a problem when you try to warp them into the shape of an arbitrary surface using modifiers such as Curve, Lattice or Shrinkwrap. The result pictured in the back of the scene above shows how loose parts tend to detach from the surface, so the final output completely misses the artist’s intent. One solution that I came up with in my projects is to separate the loose parts into individual objects and then merge them back together using the Boolean Union operator. The addon union_loose_parts.py, which you will find in the following repository: https://github.com/sadaszewski/blender-addons, does just that and throws triangulation on top of it all. What you’ll usually need to do afterwards is apply Simple (or Catmull-Clark) surface subdivision and you’ll be in good shape to do the warping. The mesh in front of the scene above illustrates the output of such a workflow. Enjoy!
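For illustration, here is a rough sketch of the same idea written directly against Blender’s Python API. It is only a sketch, not the addon’s actual code: it assumes a 2.8-or-later API and that the greeble mesh is the selected, active object, and it skips the bookkeeping the addon does.

import bpy

# Split the active greeble mesh into one object per loose part.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.separate(type='LOOSE')
bpy.ops.object.mode_set(mode='OBJECT')

base = bpy.context.active_object  # original object, now holding a single loose part
rest = [o for o in bpy.context.selected_objects if o is not base]

# Union every remaining part into the base object with Boolean modifiers.
for part in rest:
    mod = base.modifiers.new(name='union', type='BOOLEAN')
    mod.operation = 'UNION'
    mod.object = part
    bpy.ops.object.modifier_apply(modifier=mod.name)
    bpy.data.objects.remove(part, do_unlink=True)

# Triangulate the result, as the addon does on top of the union.
tri = base.modifiers.new(name='triangulate', type='TRIANGULATE')
bpy.ops.object.modifier_apply(modifier=tri.name)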
This subject seems to be a recurring question. As much as I appreciate the existing solutions, they do not do precisely what is requested and are therefore better described as workarounds.
What I would like to present in this post is a utility capable of 1) creating directory tree snapshots in a size-efficient manner, and 2) quickly comparing two snapshots to reveal files present in only one of them.
This kind of functionality is useful both for tracking what is going on in your filesystem and (especially if you use the command line a lot and/or sometimes tend to lose focus) for making sure you didn’t accidentally delete any important files. So basically a dream tool for paranoid people like myself who like to exert control over everything 😉
The usage is really straightforward (mind the trailing slashes) and pretty self-explanatory:
python2.7 dirsnap.py --snap /selected/directory/ --out snap1.out.gz

... a few days later ...

python2.7 dirsnap.py --snap /selected/directory/ --out snap2.out.gz
python2.7 dirsnap.py --compare snap1.out.gz snap2.out.gz
which should normally print to standard output comparison results looking similar to the following:
L /selected/directory/file_only_in_snap1
R /selected/directory/file_only_in_snap2
Where L and R respectively mark file paths found only in the first specified snapshot file and the second specified snapshot file.
There are also options to skip Unix-style hidden files and/or limit the depth of the search.
If you’re looking to add file date/size comparison, the code is extremely straightforward and consists of only 150 lines. So simple, yet finally something that does exactly what was needed 😉
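To give an idea of how little is involved, here is a minimal Python 3 sketch of the same idea. This is not the actual dirsnap.py code: the function names are mine and the CLI options mentioned above are omitted.

import gzip
import os

def snap(root, out_path):
    # Walk the tree and store one file path per line in a gzip-compressed snapshot.
    with gzip.open(out_path, 'wt') as out:
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                out.write(os.path.join(dirpath, name) + '\n')

def compare(snap_a, snap_b):
    # Load both snapshots into sets and print the paths unique to each side.
    with gzip.open(snap_a, 'rt') as f:
        a = set(line.rstrip('\n') for line in f)
    with gzip.open(snap_b, 'rt') as f:
        b = set(line.rstrip('\n') for line in f)
    for path in sorted(a - b):
        print('L ' + path)
    for path in sorted(b - a):
        print('R ' + path)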
Natural Neighbor is an interpolation scheme suitable for scattered data. It is based on a weighted-average approach and uses a Voronoi diagram to determine the relative contributions of the known data points. The weight of each data point is defined as the ratio of the area that an inserted interpolation point “steals” from that data point’s Voronoi cell, divided by the total area assigned to the new point.
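In other words, denoting by A_i(x) the area (volume in 3D) that the Voronoi cell of an inserted point x steals from the cell of data point i, and by f_i the known data values, the interpolated value is the standard Sibson weighted average f(x) = Σ_i w_i(x)·f_i with weights w_i(x) = A_i(x) / Σ_j A_j(x).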
The following picture contains a volumetric rendering of a 3D Voronoi diagram for a set of 4 points in a 100³ cube*. The volumes marked yellow, blue, green and orange indicate the regions whose closest data point is, respectively, point 1, 2, 3 and 4.
Adding one more point to the Voronoi diagram creates a new region (marked pink in the picture below). This new region indicates the fragment of space in which the newly added point is the closest data point, i.e. every location within the pink hull is nearer to it than to any of the original points.
You can notice that a large chunk of the blue region and smaller bits of the green and orange regions have been “stolen” by the pink one. This means that point 2 will have the biggest weight, whereas points 3 and 4 will make smaller contributions to the final interpolated result at the inserted point.
To interpolate the whole volume, the procedure above needs to be repeated for all 100³ points in the volume (or more if we increase the resolution). This process can be very time-consuming, depending on the final number of evaluations and the number of known points.
In 2006, in the paper “Discrete Sibson Interpolation”, Sung W. Park and colleagues outlined a method for approximate natural neighbor interpolation on a pixel/voxel grid using rasterization rather than an analytical approach.
His idea boils down to iterating over all points of the 2D/3D/N-d grid and determining, for each of them, the closest known data point (i.e. the Voronoi cell in which the interpolated point lies, in a static Voronoi diagram defined by the known data points). This distance determines the radius of a hypersphere centered at the interpolated point; the points inside that hypersphere are exactly the ones that will receive a contribution from the given data point through this interpolated point. The set of all interpolated points thus defines many overlapping hyperspheres. For each output point we accumulate the data values carried by all the hyperspheres that contain it, and count how many such hyperspheres there are. After all output points have been processed, the expression (accumulated value) / (hypersphere count) approximates, for each point, the value interpolated by the natural neighbor algorithm.
The exact mechanics of this approach are formally described in the paper. Intuitively speaking, each increment of the counter means one more “stolen” pixel (from whichever Voronoi cell). Why? By the definition of the sphere and its radius, any point contained within the hypersphere, if added to the diagram, would be closer to the output point (or at the same distance) than its original nearest data point. Effectively, by rasterizing a hypersphere centered at a given output point we are saying: look, all of these rasterized points would steal this output point from its original Voronoi cell. Therefore we increment their “theft” counters by 1 and accumulate the corresponding data value in their value accumulators.
On the other hand, no point outside the hypersphere would steal that particular output point, because, again by definition, it would be farther from the output point than the original data point. The whole concept is illustrated in the figure below. It is a great observation and basically amounts to an implicit construction of a discrete Voronoi diagram, weighting the data values by the relative number of stolen pixels.
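To make the bookkeeping concrete, below is a naive NumPy/SciPy sketch of this scatter in 2D. It is only my illustration of the principle, not code from the paper or from the repository mentioned below, and the function name and arguments are made up for the example.

import numpy as np
from scipy.spatial import cKDTree

def discrete_sibson_2d(pts, values, res):
    # pts: (n, 2) data point coordinates in [0, 1)^2; values: (n,) data values.
    values = np.asarray(values, dtype=np.float64)

    # Regular grid of output points.
    ys, xs = np.mgrid[0:res, 0:res]
    grid = np.column_stack([xs.ravel(), ys.ravel()]) / float(res)

    # Nearest data point (and the distance to it) for every output point.
    dist, idx = cKDTree(pts).query(grid)
    nearest_val = values[idx]

    accum = np.zeros(len(grid))
    count = np.zeros(len(grid))

    # Scatter: every output point splats the value of its nearest data point
    # onto all output points inside the sphere of radius dist (the "stolen" pixels).
    for center, r, v in zip(grid, dist, nearest_val):
        inside = np.sum((grid - center) ** 2, axis=1) <= r * r
        accum[inside] += v
        count[inside] += 1

    return (accum / np.maximum(count, 1)).reshape(res, res)

The OpenCL versions described below do essentially the same thing, only in 3D and with both the nearest-neighbor query and the sphere rasterization running on the GPU.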
The performance of this approach stems from the fact that for a large number of data points the analytical construction of the Voronoi diagram becomes increasingly complex, so a voxel-wise approximation can help. At the same time, as the density of data points increases, the typical hypersphere radius decreases, further reducing the simple sphere rasterization work. Furthermore, the algorithm can be implemented on GPU hardware (the authors discuss their shader implementation).
In this ALGOholic entry, I would like to share a couple of variations on the above algorithm. In the repository below you will find:

1) nn3d2.py – a reasonably streamlined OpenCL version that uses an OpenCL-based KD-Tree to compute the nearest neighbor (and its distance) for each voxel, and then uses OpenCL again to rasterize spheres of the resulting radii at all voxels, accumulating values and counts into separate arrays,

2) nn3d.py – a first approach to OpenCL acceleration that aimed to do the nearest-neighbor search and sphere rendering on the OpenCL side in one go, but proved not to be the preferred way to implement this,

3) nn_discr_ocl.cpp – a completely naive, classical CPU-only implementation that does not even bother with a KD-Tree, which results in a hard performance drop for many data points; it also rasterizes poorly when data points are few, so it works tolerably only for a certain range of data point counts. This version is obviously the easiest one to read through in order to understand the algorithm.
In short, the recommended production version is of course nn3d2.py and its example usage could look like this:
from nn3d2 import nn3d
from nibabel import nifti1
import numpy as np

n = 10
pts = np.random.random((n, 3)).astype(np.float32)
values = list(xrange(n))
(accum, cnt, val, radius) = nn3d(pts, values, res=np.ones((3,), dtype=np.int32) * 64)

nifti1.save(nifti1.Nifti1Image(accum / cnt, np.eye(4)), 'natural.nii')
nifti1.save(nifti1.Nifti1Image(accum, np.eye(4)), 'accum.nii')
nifti1.save(nifti1.Nifti1Image(cnt, np.eye(4)), 'cnt.nii')
nifti1.save(nifti1.Nifti1Image(val, np.eye(4)), 'val.nii')
nifti1.save(nifti1.Nifti1Image(radius, np.eye(4)), 'radius.nii')
The snippet above will run the NN algorithm on a 64x64x64 grid and should produce 5 volumes containing respectively: 1) natural.nii – results of natural neighbor interpolation of points pts with values values, 2) accum.nii – accumulator value at each voxel, 3) cnt.nii – counter value at each voxel, 4) val.nii – value of nearest input data point for each voxel, 5) radius.nii – distance to nearest input data point for each voxel.
Cross-section images of the final natural neighbor interpolation and its color-mapped 3D volume rendering are presented in the figure below**:
Once again, the algorithm above is a well-performing approximate implementation of the natural neighbor method. It handles interpolation as well as extrapolation of values, although extrapolation is limited to weighting between the values of the outermost input points rather than using gradients to estimate values outside of the hull defined by the input data. Depending on your use case, this might be a useful behavior. You will find the source code in the following GitHub repository, as well as in a snapshot below. Please enjoy!
* Data (Nifti) generated in MATLAB and rendered using MRIcroGL.
** SPM used to render cross-sections. MRIcroGL used for volume rendering.
Good news for all fans of my minimalist Flow Editor. The software is now open source (under the 2-clause BSD license) and available on GitHub under the name flowed. To build it you will also need nn-c and akima. They are used only for interpolation (natural neighbor and bicubic, respectively) and can be linked statically. A Fortran compiler is required to compile akima. There is also a Mac OS X version available in the releases section on GitHub: here. I hope you enjoy it and maybe even develop it further.
I guess the title pretty much reveals the story. Tired of the direction Linux distros are going (e.g. dependency hell, bloat, tons of interdependent packages for doing the simplest of things, inconsistent architecture, ridiculous changes like systemd, a plague of security holes, etc.) and lusting for some new experience, I decided to give the BSD family a try.
Initially I was strongly inclined towards OpenBSD, with its legendary focus on security and a bunch of unique features such as out-of-the-box ASLR, W^X, strong entropy in virtually all the places that matter (packet identifiers, PIDs, port numbers, inode numbers, etc.), a chroot-ed httpd and, maybe most importantly (yeah, we all remember Heartbleed), LibreSSL. I had my mind set on this idea to the point of writing fakescreen for use in chroot jails and submitting a patch to OpenBSD’s httpd to support URL rewriting. The installation went really smoothly and I thought I was good to go, when I remembered that I have a freaking HP LaserJet P1102 printer to set up in CUPS. Thankfully, HPLIP works just fine on OpenBSD and foo2zjs does the job of feeding the right stream to the printer. What went wrong then? After correctly printing the test page, I was shocked to see “panic: ehci_device_clear_toggle: queue active” on the screen – clearly a USB problem. Apparently I was not the only one having this issue. As you can see, I spotted a pretty promising diff to apply, but then I realized – geez – this is exactly what I was NOT looking for in an operating system!!! I mean, c’mon – a little bit of maturity – the USB stack crashing? Seriously? On the brink of 2016? Give me a break. Conciseness, great documentation and uniqueness make OpenBSD a wonderful piece of work, and certain bits of code produced by this project (OpenSSH, LibreSSL, the packet filter) benefit the free software ecosystem at large. Nevertheless, hardware support issues are still out there, even for technologies (USB) introduced nearly 20 years ago, running on hardware manufactured a couple of years ago. Sorry to say, but my best experience with OpenBSD was in a VM – and who knows, maybe that is even one of the intended niches for the OS. For my XS35V2 it is currently a no-go.
A moment later I was already thinking about the “lesser evil” and downloading Ubuntu 14.04. However, I felt really bad about going back to that toy, as well as about the impossibility of running a BSD server. Are we doomed to Linux on servers now, the same way we used to be doomed to Windows on desktops? Remembering FreeBSD, and having considered it before making the move to OpenBSD, I decided to give it a try now. I was hoping that, being a couple of times more popular, FreeBSD would have those kinds of issues polished away. Luckily I wasn’t mistaken. The same CUPS configuration worked on FreeBSD as well, only without the slightest sign of a crash. Hurray! This time I really was good to go. All that remained was to figure out how FreeBSD goes about security and, naturally, to set up all the required services.
I believe the security keyword for FreeBSD is jails. Jails are an OS-level virtualization technology resembling Linux Containers/Docker. Nullfs mounts and symbolic links do the job of unionfs, allowing parts of the base operating system to be shared across multiple jails. On top of that, the system calls for managing network interfaces, routing and raw sockets are blocked. Each jail also has its own separate list of processes and users. Access to devices is limited by devfs rules, which cherry-pick the device nodes visible to a given jail; creating device nodes from inside a jail is not possible. This turns out to be a sufficient set of restrictions to treat jails as a sort of lightweight VM. Currently my server runs all of its services in a set of 4 jails, isolated from one another and from the local network. As unlikely as it is that someone would hack my server (I guess I’m not political enough ;)), if it ever came to that I would hope to have the damage somewhat contained.
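Just as an illustration (the hostname, paths, address and ruleset number below are made up for the example, not my actual setup), one such jail could be declared in /etc/jail.conf roughly like this:

# /etc/jail.conf (sketch)
www {
    path = "/usr/jails/www";           # jail root, parts of it nullfs-mounted from the base system
    host.hostname = "www.example.org";
    ip4.addr = "lo1|10.0.0.2/32";      # single address on an internal loopback interface
    devfs_ruleset = 4;                 # cherry-pick the device nodes visible inside
    mount.devfs;
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}

The devfs_ruleset number refers to a rule set defined in /etc/devfs.rules, which unhides only the devices the jail actually needs.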
All in all, I’m quite happy with this 2-day adventure. I’ve learned quite a bit about FreeBSD and I liked what I saw. On top of that, I’ve upgraded WordPress to 4.4 and changed the theme to a hopefully sexier, minimalist one-column design. Since I’m not posting all that often, I decided to celebrate each post by taking a suitable photo to be used with the featured image functionality of the theme.
To end this pretty long entry, let me just wish everyone a Merry Christmas and a Happy New Year! Let’s hope that 2016 will be even more productive than 2015 and that we’ll enjoy together some reports of my deeds on this new sexy blog theme 😉