Meow, Meow, Meow!

Meow, meow! By Valentin Mayr. Cats are the Germans' favorite pets, and they rule the internet. Meow, meow, my cat is cute! Meow, meow, but also quite mean! When she cries at the door and I open it for her, she stays. Ears open, press here! "Miau, miau macht die kleine Katze" (a sound book for ages 18 months and up): listen to the rhyme and press the sound button to hear the kitten. "Miau Miau Katzenklau" is a fast, clever, and communicative card game for the whole family. Contents: playing cards; 1 instruction booklet (DE/EN/ES). She just lies on the sofa all day and doesn't even meow anymore. "Meow!" cried the cat, which meant: I'm hungry! Derived words: Miau, miauen. "Miau, miau! - Wie machen die Tiere?" is a picture book by Helmut Spanner.

From its tenth birthday on, a cat counts as a senior. On average, cats spend a further 3.5 hours a day on grooming. A cat's eye catches the light and throws it back, letting the visual cells make much better use of what little light there is. This makes cats good hunters, and they also need these abilities to indulge their affinity for boxes and cozy hideaways. Illustrator Sabine Kraushaar was already drawing when she could barely hold a pencil.
Insuring your cat or dog against accident and illness pays off, because veterinary treatments can quickly become expensive, as these everyday examples show:

The wau-miau pet insurance works like health and accident insurance for people. Medical treatment is a matter of trust, which is why we leave the choice of veterinarian to you.

What matters is that it is a veterinarian or therapist with a federal diploma. You can find a veterinarian in your region at wwww. In cooperation with Coop Rechtsschutz.

Should your animal get into distress, we help it: we organize immediate rescue and recovery as well as emergency transport by animal ambulance.

The search for a missing animal covers a period of up to 6 months from the animal's disappearance. The serious illness or injury of your animal thereby becomes an insured event under your existing personal travel insurance.

It does not matter with which company you have taken out your travel insurance. The benefits of wau-miau are governed by the corresponding contract conditions and are limited to the following sums:

You want to go on holiday, but the dog or cat sitter who was supposed to look after your pet in your absence is out due to illness or an accident? And no replacement is available?

If your animal is healthy, older than 3 months, and younger than 6 years, we will be happy to accept it into the insurance. Once accepted, your pet can remain insured until the end of its life. The wau-miau pet insurance renews automatically for a further year unless it is cancelled at least 3 months before expiry.

All premiums are inclusive. As of November; benefit and premium changes reserved.
This base texture is built from a binary image where each position marks whether data is allocated there. Feeding the binary texture into the reduction process, the algorithm returns the number of data values present in the texture.
Using this texture in the reduction process makes it possible to know the total number of vertices to generate. Once the reduction process has obtained that total, the compaction step reorganizes the texture from the active-voxels step, generating contiguous offsets that repeat the same voxel information as many times as there are vertices defined for that voxel.

The following image shows the difference between compacting a binary texture and compacting the texture obtained from the active-voxels step. Since the compaction process also returns a unique key for each offset, the result represents the memory required to allocate the vertices to generate, arranged by voxel and indexed by local voxel offsets.

This means that the same shader can be used to generate the vertices and normals using the marching cubes algorithm. This fragment shader is also run over a quad that represents the 2D layout of the 3D voxel space.

The uniforms for the programs are explained below. This is done to avoid useless calculations over the quad. The compaction process reallocates the vertices needed for each active voxel in the fragments that pass the first condition; it uses the masks explained here, along with the optimizations for the reduction process and the compaction step.

The result of this first step is the 2D position of the corresponding voxel allocated in the active-voxels texture; this 2D position is repeated in subsequent fragments as many times as vertices are needed for that voxel.

Once this position is obtained the shader continues to the second part, the generation of positions and normals. The second part of the shader starts by calculating the 3D position of the voxel from the 2D position in the input texture; it then reads the marching cubes combination obtained in the active-voxels step, and finally calculates the key offset, the vertex number of the fragment relative to the corresponding voxel.
With those values calculated, the next step is to obtain the edge where the vertex will be allocated; this is done by calculating an index used to read the data from the triTable texture containing the edges.

The index is saved in the mcIndex variable, and the corresponding edge in the variable called mcData. In the current implementation the shader uses masks to discard the edges that are not required, based on sums over all the possible corners and the position of the voxel.

The shader outputs the two textures with the compacted data for positions and normals shown below. Notice that the current implementation recalculates vertices for adjacent triangles, meaning that much of the information in these last two textures is repeated.
The idea is to compact the information containing only edges 0, 3 and 8 from the active-cells step with a second histopyramid, and to calculate the corresponding vertex data (positions and normals) for edges 0, 3 and 8 with the last shader explained.

The indices can be calculated by associating the original texture obtained in the active-cells step with the one holding the 0, 3 and 8 edges.

Tests done with this approach show that the current histopyramid implementation becomes the bottleneck of the application in terms of execution time, hence requiring a second one for the indices brings more penalties than benefits for performance.

With a faster stream compaction method, generating indices would improve the overall performance of the algorithm.

Divergence and branching: the reader will notice that the last shader uses two conditional branches, one to discard the fragments whose 1D positional keys are higher than the total number of vertices to generate, and a second one to select the gradient type.
In order to render fluids in WebGL from particle simulations, an implicit surface method has to be used to create a mesh from the point cloud that represents the current state of the simulation on each frame.

Many algorithms can be applied to do so; Marching Cubes is a good fit for WebGL since its implementation can take advantage of histopyramids to accelerate the process on the GPU.

This algorithm is an iso-surface extraction method over a potential field. It uses a divide-and-conquer process, locating the surface inside a user-defined 3D grid structure (voxels) and generating triangles where the iso-surface intersects the voxels of the 3D space.

The intersection of the iso-surface with each voxel can be defined by the cuts on each of the edges of the cube that represents the voxel.

To check whether one of the voxel's edges has been crossed by the surface, the algorithm compares the values at the two vertices that define the edge.

Each crossed edge of the voxel generates a vertex position that will be part of one of the triangles to create in the corresponding cube, and since there is a limited number of edges, a finite number of triangle configurations can be defined: a total of 256 possible combinations in the algorithm, with up to five triangles per voxel.

The final vertex position between the two corners is evaluated with a linear interpolation of the potential values at the edge's two vertices, using the iso-surface value as the input parameter for the interpolation.
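The interpolation above can be sketched as a small CPU reference function; names and the data layout are illustrative, not taken from the original shaders:

```javascript
// CPU reference of the edge-vertex interpolation: p1, p2 are the 3D corner
// positions of a voxel edge, v1, v2 their potential values, and iso the
// iso-surface value. The vertex lands where the potential crosses iso.
function interpolateVertex(p1, p2, v1, v2, iso) {
  const t = (iso - v1) / (v2 - v1); // linear parameter along the edge
  return [
    p1[0] + t * (p2[0] - p1[0]),
    p1[1] + t * (p2[1] - p1[1]),
    p1[2] + t * (p2[2] - p1[2]),
  ];
}
```

On the GPU the same expression runs per fragment; the CPU version just makes the formula explicit.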
Since the calculation of each vertex position can be done individually, the process can be easily parallelized on the GPU.

Also, the total number of vertices to generate can be obtained in advance; this allows the required memory for the geometry to be allocated, since the process relies heavily on tables that define the number of vertices per voxel.

A common approach to implementing the marching cubes algorithm on the GPU is to use geometry shaders, since the method is expected to generate a set of vertices from a single voxel query. This represents a big limitation in WebGL, because the current pipeline only supports vertex and fragment shaders.

To overcome this impediment, histopyramids provide an effective way to allocate the vertices needed to create the triangles.

The complete process for the surface generation can be separated into three different blocks: the potential generation, the active-voxels evaluation, and finally the generation of the corresponding vertex positions and normals for the triangles to render.
These three blocks are defined in the following diagram. The marching cubes algorithm requires a potential field that has to be generated from the particle cloud.

This is done using a separable blur that spreads the particle data, generating the required potentials. The resulting blur creates a simulated unsigned distance field, blurring along the three main axes, where the coefficients and the radius of the blur control the smoothness of the surface.

Since the blurring process is quite demanding, it can become the bottleneck of the surface generation; it all depends on the size of the 3D texture used to allocate the voxels where the particles reside.

In order to improve the performance of the blurring process, a compacted 3D texture is generated, using the RGBA channels of the texture to compress the different depth buckets of a conventional 3D texture.

The idea is to take advantage of the vector capabilities of the GPU to place all the voxel data in the four channels of each fragment, which allows the 3D texture to be blurred at a smaller texture size.
A scattering program is written to populate the 3D texture, where the vertex shader is responsible for allocating the particles inside the texture and the fragment shader only assigns the corresponding color to each fragment.

The following shaders are used to generate the required program. The vertex shader uses the attribute aVertexIndex2D to define the UV required to read the 3D position of each particle represented as a vertex; these positions are saved in the uPositionTexture uniform.

The constant c3D is a vector used to allocate the data in buckets that simulate the array of depths of a 3D texture. The corresponding channels are defined below: c3D.

Usually 8 or 16 buckets. To do so, the vertex shader uses the uOffset uniform; this value represents the displacement along the depth axis, in units, that the shader should offset to render the particle.

This would represent the cube using the different slices to represent the size in 3D. One depth value is saved in the variable zLevel to define which channel will be used; since only 64 buckets can be represented per channel, the zValue defines which of those buckets the fragment falls into. Once the 3D texture is filled with the particles, a second program is required to generate the potentials using a 3D blur.
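To make the bucket layout concrete, here is a CPU sketch of how a voxel coordinate could map to a 2D texel and an RGBA channel. All sizes and names here are illustrative assumptions, not the original c3D constants:

```javascript
// Hypothetical bucket layout for a simulated 3D texture: depth slices are
// laid out as contiguous tiles in a 2D texture, and four depth ranges are
// packed into the R, G, B and A channels.
const SIDE = 64;                        // x/y resolution of one depth slice
const DEPTH = 64;                       // number of depth slices in the volume
const BUCKETS_PER_CHANNEL = DEPTH / 4;  // depths stored per RGBA channel
const TILES_PER_ROW = 4;                // bucket tiles per texture row

function voxelTo2D(x, y, z) {
  const channel = Math.floor(z / BUCKETS_PER_CHANNEL); // 0..3 → R,G,B,A
  const zLevel = z % BUCKETS_PER_CHANNEL;              // bucket inside channel
  const u = (zLevel % TILES_PER_ROW) * SIDE + x;
  const v = Math.floor(zLevel / TILES_PER_ROW) * SIDE + y;
  return { u, v, channel };
}
```

The shader performs the equivalent arithmetic with the c3D constant and the uOffset uniform.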
This is done using a quad pass and two different fragment shaders: one for the blurs along the non-depth axes, and another fragment shader for the blur along the depth axis.

The fragment shader for the depth axis is defined below. Since the 3D texture is simulated using contiguous buckets in the 2D texture, the blur for the depth axis has to take into account the depth range of every fragment used in the blurring process; hence a vector is defined to know whether the depth of the fragment to use is in the range of the depth of the fragment to blur.

The three previous scenarios are used to match the compressed RGBA data between the two values. These equations are not necessary for the two other (non-depth) axes, since their blurring is done inside the same depth bucket.

For the non-depth axes a simple box blur is used, defining the blurring direction with a user-provided uniform uAxis; this allows the 3D blur to be done in three separable passes.

Notice that the filtering is based on box coefficients, but these could be modified to make a Gaussian filter, or any type of filtering that could provide better results for the potential to simulate.
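One separable pass of that box blur can be sketched on the CPU as follows; the data layout is illustrative, and the shader does the same averaging per fragment:

```javascript
// CPU reference of one separable box-blur pass along a single axis, the
// same idea the uAxis uniform selects in the shader.
function boxBlur1D(src, width, height, radius, axis /* 'x' | 'y' */) {
  const dst = new Float32Array(src.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let sum = 0, count = 0;
      for (let k = -radius; k <= radius; k++) {
        const sx = axis === 'x' ? x + k : x;
        const sy = axis === 'y' ? y + k : y;
        if (sx < 0 || sx >= width || sy < 0 || sy >= height) continue;
        sum += src[sy * width + sx];
        count++;
      }
      dst[y * width + x] = sum / count; // box coefficients: plain average
    }
  }
  return dst;
}
```

Running such a pass once per axis (x, y, then the depth layout) gives the separable 3D blur described above; swapping the plain average for weighted taps gives the Gaussian variant.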
The following image shows the result of the potential generation for the two previous 3D textures; notice that for the left image the buckets are kept separated, avoiding blurring between buckets.

It can be seen in the left 3D texture that the appearance of the green and dark blue colors shows how the different depths are being blended, meaning the potential is being softened among the different depths.

Once the potential is obtained, the marching cubes algorithm requires evaluating the values at the corners of the voxels. In order to avoid software trilinear interpolation, another quad pass is done over the result of the 3D blur passes to calculate the values at the corners.

Notice that the corners evaluation is done using the compressed blurred texture from the previous pass; this means that the same blending equations have to be used to match the different channels based on the depth regions of the fragments to use.

With this final pass the algorithm has two potential textures, one for center values and another for corner values; this last one is the texture used for the marching cubes steps.
This is also done using a fragment shader, defined below. The shader evaluates the 3D position of the fragment in the expanded texture and searches for that value in the compressed texture, using the depth range to define which RGBA channel will be read to obtain the corresponding information.

The image below shows the result of the expansion of the texture. A speedup of up to 4X is gained, based on the fact that the compressed texture area is 4 times smaller than the non-compressed one.

The right side shows the expanded result, where the depth buckets can be inspected visually without color interpretations. One of the big limitations of GPGPU computing is that, since the architecture is designed to work in parallel, rearranging or compacting data is a non-trivial task on the GPU. To overcome this limitation different stream compaction methods have been created; histopyramids are the algorithm that will be discussed in this post.

The following image shows what is achieved with stream compaction. The texture represents a set of slices from a simulated 3D texture showing the different Z levels; in a 2D representation the data is scattered all over the texture. The colored line represents the same data compacted, displaying the 3D positions of the reallocated voxels.
Implementing histopyramids is not a straightforward process; it requires two different steps, which are reduction and traversal (compaction).

The first phase, reduction, is the process where the algorithm generates a set of texture levels based on the original (base) texture holding the scattered data; the number of textures to generate in the reduction process depends on the size of the original texture, using the following equation.

For the reduction process, a base texture has to be generated from the original data texture. The idea is to define a binary image where each position holding data is marked with an RGBA value of vec4(1.).

This first step is quite similar to mipmap generation on GPUs, but instead of using an averaging reduction to generate each texture, each pixel of the parent level of the pyramid is calculated as the sum of the four contiguous pixels of the level below.

At the end of the reduction process a pyramid of images is created, where each level represents the sum of pixels from the levels below. Once the reduction process is completed and the pyramid is generated, the traversal is required to compact the data.
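The reduction can be illustrated with a CPU reference implementation; the array layout and names are illustrative (the GPU version does the same sums in a fragment shader, one level per pass):

```javascript
// CPU reference of the histopyramid reduction: starting from a binary base
// texture (power-of-two, square), each parent texel is the sum of its four
// children, down to a single texel holding the total element count.
function buildPyramid(base, size) {
  const levels = [base];
  for (let s = size; s > 1; s = s / 2) {
    const prev = levels[levels.length - 1];
    const half = s / 2;
    const next = new Float32Array(half * half);
    for (let y = 0; y < half; y++) {
      for (let x = 0; x < half; x++) {
        next[y * half + x] =
          prev[(2 * y) * s + 2 * x] +
          prev[(2 * y) * s + 2 * x + 1] +
          prev[(2 * y + 1) * s + 2 * x] +
          prev[(2 * y + 1) * s + 2 * x + 1];
      }
    }
    levels.push(next);
  }
  return levels; // last level is 1x1 and holds the total count
}
```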
Each of the output elements of the resulting stream has to traverse the pyramid to finally obtain a UV position (texture coordinates). This texture position relates the output stream to a data value in the scattered texture.

The traversal is done using a key index for each texel of the output texture, meaning that each pixel is associated with an incremental, unique 1D index that is used to compare against the different levels of the pyramid.

Also, each texel of the pyramid is reinterpreted as a key interval value. This second step starts at the first level of the pyramid, checking whether the key of the texel to evaluate is inside the total range of the data to compact.

To do so, the key value has to be lower than the total sum allocated at the top level of the pyramid. Each traversal step evaluates which of the ranges the key index of the output stream falls into; that range gives a 2D texture position for each level traversed.

On each new level a new set of ranges has to be generated, and a new UV position is extracted based on the range the key falls into.

In the final phase the base texture is used and the ranges have a difference of one unit, meaning that this step gives a UV coordinate that can be used to read the corresponding data from the scattered texture and place that data in the output texel of the stream.
The key evaluated falls in the B range; hence for the next level the algorithm should use the four values allocated at the [0, 1] position. In this level the new ranges are defined as:

This coordinate is used to read the data at that position in the scattered texture and write the value read into the position relative to the key used to traverse the pyramid.
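The whole traversal can be sketched on the CPU as below; `levels` is ordered base-first as a reduction pass would produce it, and the layout and names are illustrative:

```javascript
// CPU reference of the histopyramid traversal: each output key descends the
// pyramid, choosing among the four child ranges at every level until it
// lands on a base-texture texel.
function traverse(levels, key) {
  let x = 0, y = 0; // coordinate at the current (coarser) level
  // Walk from just below the 1x1 top level down to the base texture.
  for (let l = levels.length - 2; l >= 0; l--) {
    const size = Math.sqrt(levels[l].length);
    x *= 2; y *= 2; // step into the 2x2 block of children
    const a = levels[l][y * size + x];       // range A
    const b = levels[l][y * size + x + 1];   // range B
    const c = levels[l][(y + 1) * size + x]; // range C
    if (key < a) {
      // stay at (x, y)
    } else if (key < a + b) {
      key -= a; x += 1;
    } else if (key < a + b + c) {
      key -= a + b; y += 1;
    } else {
      key -= a + b + c; x += 1; y += 1;
    }
  }
  return [x, y]; // UV of the key-th element in the scattered texture
}
```

The shader version performs the same range tests with vector masks instead of branches, as discussed later.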
This tutorial also assumes a certain level of understanding of basic WebGL operations like shader compilation, quad generation, and the use of the different tests (scissor, depth, stencil) that can be applied to framebuffers.

Since all the important operations will be done in shaders, there are two main programs that have to be created: a reduction program and a traversal program.

All the programs are performed over a quad, meaning that the vertex shader is only responsible for generating the vertex positions of the quad and providing a UV coordinate for each fragment in the fragment shader.

The vertex shader evaluates all the data for the vertices at execution time, but the user can also provide attributes with positions and UV coordinates for each vertex if desired.

The fragment shader works similarly to a mipmapping shader; hence four texels from the input texture have to be read, and the resulting color is the sum of those four values.

This can be done with the following shader. The fragment shader reads the information adjacent to the evaluated fragment and sums that data to obtain the final fragment color.

Since this shader is applied to all the different levels of the pyramid during the reduction process, it needs to know the actual size of the texture being evaluated; this is provided in the uSize uniform, more conveniently defined as the inverse of the size value.
For the traversal process there are two ways of addressing the problem: writing the traversal in the vertex shader, or writing it in the fragment shader.

Since a vertex shader for quad generation is already defined, this new program performs the traversal in the fragment shader using the following code.

The previous traversal shader requires three important inputs, defined below. Notice that this data is mostly used to evaluate the 3D position based on the UV position obtained in the final traversal (this traversal is adjusted for a marching cubes application), so in the generic case only two values would be required for this shader: the size of the base texture and the number of levels generated in the reduction passes.

Having a single pyramid texture avoids the use of different uniforms for each level, since those per-level uniforms would limit the shader to a fixed maximum number of levels.

The image below shows an example of a pyramid texture generated in the reduction process. To avoid if-branching, the traversal evaluates the first three ranges using vector masking and logical comparisons; the final range comparison can be evaluated as the difference between the total range of the whole loop and the three partial ranges evaluated before, which avoids another texture read.

The use of a single pyramid texture requires an offset position that has to be updated on each new loop iteration; this offset has to be included in the UV coordinate obtained to traverse the next level.

Since the base texture is not included in the pyramid texture, the last traversal is done outside the loop; this final step uses the base texture to define the final UV positions required.

In that regard, using the fragment shader for the traversal has the advantage of avoiding positioning the vertices with the vertex shader. If the key is higher than the value from the texture, the fragment can be discarded.
The second function generates a framebuffer based on a provided texture; it has a color attachment for the texture and a depth attachment for z-buffering if required.

With the previous function a set of gl.RGBA textures is generated for the pyramid using the following code. The baseTextureSize variable represents the length of the original scattered texture.
The textures and framebuffers are saved in two arrays to be used in the compaction function. The compaction function is where the shaders are executed; to do so, three programs had to be generated beforehand using the shaders explained in the previous sections.

The three required programs are listed below. The function also requires declaring the needed uniforms for the reduction step, mainly the sampler used for the reduction and the size of each sampler.

The uniforms used are defined in the previous explanation of this shader. To use the compaction function, the three programs have to be generated and compiled, with their uniform and attribute declarations ready.

Pointers to the shader uniforms are saved, as a dynamic attribute, in the same array position that holds each program; these pointers are used to send the data to the GPU on each invocation of the compaction function.

The first step, reduction, invokes a set of quad draw calls on the different pyramid textures using the reduction program; for each iteration a new size is calculated and the corresponding uniforms are sent.

After a new level is generated, the result is copied into the pyramid texture using the copyTexSubImage2D method, with an offset separating each level of the pyramid within the same texture.
This means that if the user opts to read the data on every frame, the overall framerate of the application can suffer just from trying to obtain a precise amount of data to evaluate after the compaction process.

To overcome this limitation, an expected amount of data is defined as part of the arguments of the function, to supply a way to avoid the use of the gl.

To calculate the region of fragments for the scissor, an area is defined by the size of the texture and the ratio of totalCells divided by the size of the texture; if the total cells are not read using the second step, then the expected amount of cells is used.

With the scissor test and the discard method, the traversal function is accelerated. The function returns the total cells read in the second step, or the expected cells defined by the user (the value is left unmodified); it also saves the compacted data in the provided framebuffers, ready to be used for subsequent GPGPU operations.

The previous function uses two helper functions, to bind textures to the corresponding program and to bind attributes; these functions are defined below.
The previous shaders are a straightforward implementation of the algorithm explained before, but there is quite a nice improvement that can be made in the traversal shader to go from three texture reads per level to only one; this is done using vec4 histopyramids.

The following image illustrates the modifications to the algorithm needed to implement this variant. These kinds of pyramids perform the reduction in a different manner: since the GPU can work with four channels per pixel, the reduction shader can be rewritten to save the partial sums of the four pixels read, instead of the total sum used before.

This can be seen in the following fragment shader. The program saves the partial sums in the channels of the final color, which allows the ranges to be read using only the parent pixel; this means the traversal only requires fetching one texel from the texture level to traverse.

Also, the ranges can be evaluated faster: once the end of each range is defined, creating the start values of the ranges is trivial. The following shader shows the implementation of the traversal with only one texture read.
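The vec4 variant can be sketched on the CPU as follows; function names and the data layout are illustrative:

```javascript
// CPU sketch of the vec4-histopyramid idea: instead of one total per parent
// texel, store the running partial sums of its four children in the RGBA
// channels, so a single read of the parent yields all four range ends.
function reduceVec4(children /* [a, b, c, d] counts of a 2x2 block */) {
  const [a, b, c, d] = children;
  // Each channel holds the end of one range: [a, a+b, a+b+c, a+b+c+d].
  return [a, a + b, a + b + c, a + b + c + d];
}

function pickChild(parentTexel, key) {
  // With the range ends packed in one texel, choosing the child needs no
  // extra texture reads: find the first range end the key falls under.
  for (let i = 0; i < 4; i++) {
    if (key < parentTexel[i]) {
      const start = i === 0 ? 0 : parentTexel[i - 1];
      return { child: i, localKey: key - start };
    }
  }
  return null; // key outside the total range
}
```

The range starts are just the previous channel's value, which is why the shader can rebuild them trivially from the ends.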
Some time has passed since we posted anything, so we are going to try to write something somewhat interesting.

This post is about the making of a sweet project developed in our studio, called translatingmelvin. The project consists of a webcam transmitting the image of a plant in real time. The plant had to mount some control sensors (humidity, temperature, lighting), and with the values obtained from them we had to define the mood of a plant called Melvin.

The development also required an artificial intelligence that allowed the user to hold a conversation with Melvin, whose answers would differ depending on the mood of the plant.

Talking with Ruben, we realized that there was no interactivity with the installation, so we gave him some ideas, like activating an air pump that could create bubbles in a water container, or dropping soap bubbles, or maybe turning a light on or off, perhaps watering the plant, or opening a curtain close to the plant.

Sadly, none of them came to prosper. Melvin also had to emit, via Twitter, the answers generated in the conversations with every user.

We also had to develop a system to talk with Melvin from Twitter using the hashtag translatingmelvin. With this we had the social part of the project covered.
We had been working for some time with a data acquisition device from National Instruments, using LabView and some socket connections to perform hardware control of the things we could do in Flash.

Even so, for this project we decided to give Arduino and Openframeworks a try. From this blog we would like to congratulate and send our admiration to the teams of both projects: Arduino and Openframeworks are very powerful solutions, very well documented, with quite a big community of people giving solutions to issues of varying difficulty.

And the best part of all is that this information is given altruistically, available to anyone with a little interest in learning about these projects.

The first step was to choose the electronic components we should use. We found big help in an internet store called Farnell.

The delivery is also very fast; we had what we needed the next day before 12h. This last paragraph sounds like a paid comment inside a post, but when you try to find a humidity sensor in your local electronics store you feel relieved that Farnell exists.
We knew that we were going to connect the components to an Arduino. At the beginning we were going to use an Arduino UNO, but finally we decided to connect everything to an Arduino Mega, since we needed more outputs to drive seven relays.

We had to use a capacitor and isolated cable, as explained in the sensor's datasheet. The capacitor keeps the signal stable along the way, and it allows us to read it with sufficient intensity, without losing any bits along the journey.

To obtain the light intensity we used a Vishay Siliconix BPV22NF photodiode. This is an analog sensor, and it was as easy as sending some current to the input pin and reading the current intensity at the output pin.

For the temperature we used an LM35 sensor; this analog unit works in a similar manner to the photodiode: you feed it with current and you obtain a variable output that defines the actual temperature.
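Reading an LM35 typically boils down to a small conversion: the sensor outputs about 10 mV per degree Celsius. A sketch of the conversion (in JavaScript for consistency with the rest of the post; the 10-bit ADC range and 5 V reference are assumptions, not values from our setup):

```javascript
// Hypothetical helper: convert a raw ADC reading from an LM35 to degrees
// Celsius, assuming a 10-bit ADC (0..1023) with a 5 V reference and the
// LM35's nominal 10 mV/°C output.
function lm35ToCelsius(adcReading, vref = 5.0, adcMax = 1023) {
  const volts = (adcReading / adcMax) * vref;
  return volts * 100; // 10 mV/°C → multiply volts by 100
}
```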
To turn on the light bulbs that represent the mood of the plant, we had to switch the mains alternating current using the 5V direct current from the Arduino, so we also needed a set of solid state relays.

The bad news was that it was not possible with the components we had selected. The basic circuit is shown in the next image.

The previous circuit is controlled using the values obtained from the photodiode: as the lighting in the room increases, the lighting of the LEDs increases too, and vice versa.

This way we found that the image from the webcam was never burned out, and people could read the mood icons in low-light conditions.

This way we could send and read the voltage from the sensor pins, send voltage to any given relay to turn the mains bulbs on or off, or modulate the current applied to the transistors for the LED dimming.
One of the issues we had to deal with was that the sensor SHT1x the one for the humidity and temperature has a protocol that requires to send pulses with a nanoseconds frequency.
Using Firmdata to communicate with this sensor is not adequate. The solution for this problem was to move the communication code into the Arduino Board, since we can send high frequency pulses from the board to a specific pin.
Finally we had to merge the Firmada Standard code with the communication code for the sensor. Once we obtained the humidity and temperature values from Arduino, we had to send those values back to Openframewors, this was made hacking the code from Firmdata.
We defined two virtual pins in Firmata to carry the values obtained from the sensor; we also had to insert an exception into Firmata's pin read/write loop so it would not read or write anything else on those pins.
Flash's cross-domain policy requires that, if you want to establish a network connection via sockets from the browser, the socket server must first send, on a specific port, a string containing a crossdomain policy file.
We handled this by leaving one port open and dedicated to this single task. When someone established a socket connection to this port, this function was called:
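The original function listing was lost; as a hedged reconstruction of what such a handler must produce: Flash sends `<policy-file-request/>` and expects a null-terminated policy XML in return before any socket traffic is allowed. A JavaScript sketch (function name ours):

```javascript
// Build the cross-domain policy answer Flash expects.
// The reply must be a policy XML terminated by a null byte.
function crossdomainPolicy(port) {
  return '<?xml version="1.0"?>' +
         '<cross-domain-policy>' +
         '<allow-access-from domain="*" to-ports="' + port + '"/>' +
         '</cross-domain-policy>\0';
}
```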
Once the policy exchange was done, connections between the designated port and the client could be initiated, which gave us a persistent real-time connection with the web browser.
It is very easy to implement and very stable: we tested it with a large number of simultaneous connections and the application worked just fine.
The next video was recorded in real time on an AMD Radeon HD (2 GB); feel free to watch the video if the simulation does not run on your computer.
If you have seen the demo you probably know this blog; there I found that the fluids were made using SPH (Smoothed Particle Hydrodynamics) for the simulation, so I had a place to start searching for information.
What I found were some good links that explain the maths behind the system very well. If you want to understand how the simulation works, you should read the paper by Matthias Müller; it explains the basics of fluid simulation using particles to compute the properties of the fluid.
You can also read this paper by Harada, which is better suited to a GPU implementation. There are two more links you should read: Micky Kelager also explains SPH, but most importantly he shows how to handle collisions in the simulation using distance fields, and he also defines ways to calculate some important coefficients for the simulation.
Finally, GPU Gems 3 has a good chapter on rigid body simulation that explains very well how to build uniform grids to find the neighbors of a particle.
The process has a few steps, explained in the next image; the most important of them are the neighborhood search and the velocity update. I will not go into much detail here because the previous links explain the GPU implementation of SPH far better than I could.
One of the most important things in the simulation is finding the particles around the fragment particle, because they are responsible for the forces that will be applied in the simulation.
We implemented a uniform grid based on the paper by Harada. This grid is a 3D voxel partition of space that keeps track of up to four particles per voxel.
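A minimal sketch of the voxel addressing such a grid relies on (names and layout are our assumptions, not Harada's code):

```javascript
// Flatten a 3D position into a voxel index for a uniform grid of
// gridDim^3 cells covering the cube starting at gridMin with the
// given cell size. Particles in the same voxel share this index.
function voxelIndex(pos, gridMin, cellSize, gridDim) {
  const ix = Math.floor((pos[0] - gridMin) / cellSize);
  const iy = Math.floor((pos[1] - gridMin) / cellSize);
  const iz = Math.floor((pos[2] - gridMin) / cellSize);
  return ix + iy * gridDim + iz * gridDim * gridDim;
}
```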
Using SPH to simulate fluids requires defining some coefficients that alter the properties of the fluid. One of them is the smoothing radius: if you change this parameter, you should also change the number of particles in the simulation.
If this value is too small, there will be no particles interacting with the fragment particle, so there will be no simulation; on the other hand, if the radius is too big, the forces will not be smoothed properly and the simulation will show instabilities.
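For reference, the smoothing radius h enters through kernels such as Müller's poly6 density kernel, which gives zero weight to particles beyond h; a small sketch:

```javascript
// Müller's poly6 kernel: W(r, h) = 315 / (64*pi*h^9) * (h^2 - r^2)^3
// for r <= h, and 0 beyond the smoothing radius h.
function poly6(r, h) {
  if (r > h) return 0;
  const coeff = 315 / (64 * Math.PI * Math.pow(h, 9));
  const d = h * h - r * r;
  return coeff * d * d * d;
}
```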
You can also see the position, velocity and density fields applied as shader colors on the particles by changing the shader option in the control panel.
Finally, you can switch from point to billboard-sphere rendering. All these variables can be updated from the control panel; once you update them, you only have to restart the simulation.
Simulation showing the densities of each particle. Simulation showing the position field of the particles. Simulation showing the velocity field of the particles.
The simulation works fine with these configurations. This is our first version of the implementation, so we hope to make it faster using a more efficient neighborhood search.
We hope future posts will follow two branches: SPH optimizations, and particle rendering using marching cubes.
Whenever we start coding anything in our studio, the first thing we do is search for information about the subject, looking for papers and techniques that could help us develop what we want.
The work made by these guys is amazing, but more importantly, they discuss all the methods used in their works, so you can start searching on your own if you would like to do something similar.
In our case we wanted to make some particle animations like the ones found here, and they have the perfect starting point in this post.
There were two main things we would have to deal with in order to get a simple particle animation. In this paper the author defines a divergence-free noise suitable for velocity simulations.
Using divergence-free potentials is important for a flow simulation because it avoids sinks in the flow; without this property you could end up with all the particles converging to a single final position, like a hole in space.
Implementing it is quite simple, because you only have to calculate the differentials of the potential noises used to create your field.
To create the potentials used for the velocities we used the equations from the next image. The only assumption we made is that every scalar component of the potential vector field is a 2D function that satisfies the curl operator.
The second group of equations reflects our assumption: we defined each axis potential as a function of the other two axes in order to use 2D textures for the three potentials; this way we do not need a 3D volume texture for each potential.
With this assumption we ended up with three 2D textures defining our potential, so the velocity is the curl operator applied to the potential field, as you can see in the equations.
To calculate the partial derivatives we only had to compute finite differences over the given textures; and since our point is defined in texture space, we could use its position to fetch the potential value from each of the three potentials.
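The scheme above can be sketched as follows; the actual shader samples the potential textures, while this illustration samples plain scalar functions (all names are ours):

```javascript
// Velocity as the curl of a vector potential (psiX, psiY, psiZ),
// each component a scalar function, using central differences:
//   vx = dPsiZ/dy - dPsiY/dz
//   vy = dPsiX/dz - dPsiZ/dx
//   vz = dPsiY/dx - dPsiX/dy
function curl(psiX, psiY, psiZ, x, y, z, h = 1e-3) {
  const d = (f, axis) => { // central difference of f along one axis
    const p = [x, y, z], m = [x, y, z];
    p[axis] += h;
    m[axis] -= h;
    return (f(p[0], p[1], p[2]) - f(m[0], m[1], m[2])) / (2 * h);
  };
  return [
    d(psiZ, 1) - d(psiY, 2),
    d(psiX, 2) - d(psiZ, 0),
    d(psiY, 0) - d(psiX, 1),
  ];
}
```

By construction this field is divergence-free, which is exactly why the particles never collapse into sinks.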
Now that we knew how to create the velocity field, we needed three good noise textures to work with. This was an issue, because if we loaded external textures our development would be tied to those images.
So we decided to create the noise textures ourselves. With the noise functions solved, we only needed to make the noise turbulent; we found that we could add higher frequencies to get a turbulent noise (you can read a full explanation here).
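A minimal sketch of that idea, summing octaves of a base noise with doubled frequency and halved amplitude each step (function names are ours):

```javascript
// Turbulence: sum octaves of a base 2D noise, doubling the frequency
// and halving the amplitude each octave, then normalize by the total
// amplitude so the result stays in the base noise's range.
function turbulence(noise2D, x, y, octaves) {
  let sum = 0, amp = 1, freq = 1, norm = 0;
  for (let i = 0; i < octaves; i++) {
    sum  += amp * noise2D(x * freq, y * freq);
    norm += amp;
    amp  *= 0.5;
    freq *= 2;
  }
  return sum / norm;
}
```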
This is how we ended up creating our own textures, which we could change in real time (though not on every step), allowing us to find the noise best suited to our needs.
In the next image you can see the same base noise with and without turbulence. Saving the potentials as textures adds a new problem to solve: textures in WebGL only store 8 bits per channel, so if you naively save the potential value in one channel you only get 256 discrete levels.
We use two functions to pack and unpack floating point values in the textures. The first function packs one value into the RGB components, so you can save one value with very high resolution; the second unpacks the RGB components back to the original value.
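The original code listing did not survive; the following is an assumed reconstruction of such a pack/unpack pair, taking values in [0, 1) since, as noted below, negatives are not stored:

```javascript
// Pack a float in [0, 1) into three 8-bit channels (r, g, b in 0..255),
// giving roughly 24 bits of precision instead of 8.
function packToRGB(value) {
  const v = Math.min(Math.max(value, 0), 1 - 1e-7);
  const r = Math.floor(v * 256)      % 256; // most significant byte
  const g = Math.floor(v * 65536)    % 256;
  const b = Math.floor(v * 16777216) % 256; // least significant byte
  return [r, g, b];
}

// Reverse the packing: recombine the three bytes into one float.
function unpackFromRGB(rgb) {
  return rgb[0] / 256 + rgb[1] / 65536 + rgb[2] / 16777216;
}
```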
Note that these functions do not store negative values, so it is up to the programmer to define how the unpacked value is to be interpreted.
The bad news is that you need a whole texture to save a single value per particle, so if you want to save a 3D position you will need three textures.
The last paragraph describes the main bottleneck of the application: since we have to save the particle positions to run the simulation, we need to traverse all the particles three times, saving each axis position in a different texture.
In the next image you can see one of the axis-aligned position textures used. Since we can set many more slices, we found that the interpolation between two slices was not necessary, so we removed it to gain some speed.
With this in mind, if you want to implement good-quality shadows in your volume we recommend 16x16 bucket slices, but the final shadow for the floor will be very slow.
In the next image you can see a test of the shadows with and without color. With the curl noise, the volume shadows and the floating point issues solved, we ended up with a seven-step algorithm; this process requires five render target changes, and we need to traverse all the particles five times.
We also perform some window (2D) calculations in the fragment shader to get the correct blurring for the final composition, and we calculate the potentials in the fragment shader as well.
The other thing is to use another texture to control the life of each particle. The process described above has a huge bottleneck in saving all the particle positions, so we wanted something faster to obtain them.
So instead of calculating the positions step by step, we defined the positions based on a path (a Bezier path); this Bezier would be affected by the noise, so we would only need one texture to save the Bezier points.
This way you can have as many paths as the height of your texture, and each path can have as many points as the width of the texture. In our case we only used two paths; you can see some images of the implementation.
The previous image shows the positions texture using two paths, and the 3D result of the Bezier after applying the curl noise. The downside is that all the particles are constrained to the Bezier path, so we cannot perform any particle dispersion.
Instead of applying the curl noise to the Bezier path, we could use some of the total particles to create a volumetric path, and some more to perform particle dispersion with the first technique.
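For reference, one point of a cubic Bezier segment built from four control points stored in the path texture can be evaluated like this (a generic sketch, not the post's shader code):

```javascript
// Evaluate a cubic Bezier segment at parameter t in [0, 1].
// p0..p3 are [x, y, z] control points, e.g. read from the path texture.
function cubicBezier(p0, p1, p2, p3, t) {
  const u = 1 - t;
  const w0 = u * u * u;
  const w1 = 3 * u * u * t;
  const w2 = 3 * u * t * t;
  const w3 = t * t * t;
  return [0, 1, 2].map(i => w0 * p0[i] + w1 * p1[i] + w2 * p2[i] + w3 * p3[i]);
}
```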
In the next image you can see how the Bezier paths are rendered using just one texture. In the last post we talked about the implementation of volume shadows in a particle system; that approach used a couple of for loops to define the final shadow intensity for each particle.
So we tried a pair of optimizations to address the defects of the previous work: first we moved all the texture reads into the fragment shader, and then we also performed the per-step shadow accumulation in the fragment shader.
At the end of the process we defined the following steps to get the shadows done. The first step is no different from the last approach: we define 64 buckets in the vertex shader using the depth of the particles (assuming all of them are inside a bounding box), then we render them into a shadow framebuffer with a color that depends on the shadowDarkness variable.
For this step we use the same getDepthSlice function in the vertex shader. There is one very important difference from the previous version: the new function requires two parameters, an offset and the transformMatrix.
The second one tells the function which view (camera view or light view) is used to define the buckets; this is useful if you use the bucket system to sort the particles and blend the buckets.
The first parameter offers the possibility of obtaining the next or previous bucket from a given depth; with this we can interpolate between two buckets or layers, which gives us a linear gradient of shadows between two layers instead of 64 fixed steps.
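A hedged CPU-side sketch of what such a getDepthSlice could look like; the real one runs in the vertex shader and applies the transform matrix first, which we assume has already been done here:

```javascript
// Map a depth inside the bounding box [near, far] to one of numSlices
// buckets. offset asks for the next (+1) or previous (-1) bucket, which
// is what makes interpolating between two shadow layers possible.
function getDepthSlice(depth, near, far, numSlices, offset) {
  const t = (depth - near) / (far - near);
  const slice = Math.floor(t * numSlices) + offset;
  return Math.min(Math.max(slice, 0), numSlices - 1); // clamp to range
}
```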
This step is the main optimization in this code: it blends the previous bucket into the next one, so you can define all the shadows for every layer in 64 passes.
Blending the buckets requires enabling gl.BLEND and defining the blend factors (gl.ONE among them). Then you have to render one quad into the shadow framebuffer, using the shadow map of that framebuffer as a texture; the most important thing here is to get the bucket coordinates right for each of the 64 passes of the quad.
This is also done in the vertex shader: on each pass, one depth is defined for the quad, and the vertex shader computes the UV texture coordinates using the next chunk of code.
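Assuming the 64 buckets are laid out as an 8x8 tile atlas inside the shadow texture (the layout is our assumption), the UV offset for a given pass could be computed like this:

```javascript
// UV offset of bucket `slice` inside an 8x8 tile atlas (64 buckets).
// Each tile occupies a 1/8 x 1/8 region of the texture.
function bucketUV(slice, tilesPerRow = 8) {
  const col = slice % tilesPerRow;
  const row = Math.floor(slice / tilesPerRow);
  return [col / tilesPerRow, row / tilesPerRow];
}
```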
The next image shows the result of this blending. This step presents no difficulty, but it requires a composition framebuffer as an intermediate destination; this avoids read-write feedback loops on a single texture.
We defined the blur in two different passes, one for the X component and one for the Y component; we used a simple box blur, but you could use a Gaussian blur or any other blur you like.
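A minimal sketch of one such separable 1D pass over an array of samples (the real version runs per pixel in the fragment shader, once along X and once along Y):

```javascript
// One separable box-blur pass: average each sample with its neighbors
// within `radius`, clamping the window at the borders.
function boxBlur1D(data, radius) {
  return data.map((_, i) => {
    let sum = 0, count = 0;
    for (let j = i - radius; j <= i + radius; j++) {
      if (j >= 0 && j < data.length) {
        sum += data[j];
        count++;
      }
    }
    return sum / count;
  });
}
```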