The urban legend is tall and long - Dude, one programmer on a Raspberry Pi hosted a billion users and the code ran for like a million years! The hype might be exaggerated, but the truth is pretty awesome. There’s so much going on in Erlang/Elixir that at first glance it can be overwhelming for someone coming to it with zero experience. And, making matters worse, coming to it with zero experience in functional programming is an even larger leap.
So, I started the journey with basically zero experience with functional programming and no experience with Erlang/Elixir. And with a lot of reading, writing, and question asking - in roughly 12 weeks - I’ve become very comfortable with Elixir, OTP, Metaprogramming, and even some Erlang. I thought I’d write about how to navigate the seemingly daunting process of becoming proficient.
You first need to get a development environment set up. This encompasses the Erlang/Elixir environment and an editor of choice. There are essentially two installation choices: building from source or employing a package manager:
Regarding an editor, the best option here is to use your editor of choice, and then add the relevant extensions to give you the fancy syntax highlighting and autocompletion that make your entry into Elixir that much easier. I use both Atom and Vi equipped with such extensions. For those using Sublime, you’re in luck: aftermarket add-ons are also available to ease your editing.
Okay, you successfully installed Erlang/Elixir and set up your editor - now what? Well, we have a few things we need to do first. We need to step back and take a tour of the Elixir language and learn about the syntax, structures, and programming paradigms that broadly encompass the language. To do this, we need to understand how to interact with the Elixir environment so we can experiment and understand what these things are and how they behave.
I’d start with the official website - specifically, the Getting Started section. Work through all 22 sections.
The first section is going to introduce you to the Elixir interactive REPL - iex. This is an indispensable tool to play, experiment, and learn! Along with iex there are elixir and elixirc, a script interpreter and a compiler respectively. Finally, there are escripts, which are binary files that execute from the command line. More on that later.
One cool thing about the REPL is that there is a TON of documentation built into the tool. Virtually all of the code is documented in iex. Start off by asking for help by running the help function.
There’s a lot to learn in these first twenty-two sections:
Pay close attention to these major concepts:
- Pattern Matching
- Processes
- Recursion
- Enumerables and Streams
- Lists, Maps, and Tuples
Focusing first on getting these basic Elixir concepts down will make learning the other, more interesting topics far easier. Once you feel that you’ve mastered these introductory elements, take a look at the meat of Elixir:
Now that you’ve taken a broad, cursory overview of the language by reviewing the website, familiarizing yourself with the tools, and experimenting with the language in the iex REPL, it’s time to dive into a few books to build on our newly developed perspective.
I found the following books very helpful:
In addition to the website and the books, I strongly recommend joining the Elixir Slack channel. There are so many knowledgeable and generous people on the Slack channel willing to help you understand concepts that may seem foreign or opaque to the noob. Ask away!
So you’ve learned the language, experimented with some of the features, maybe even learned how to use GenServer and Supervisors, and did a little metaprogramming. What’s next?
Elixir is well suited for distributed computing. iex starts a virtual machine called the BEAM, and Elixir makes it easy to connect multiple BEAMs together to solve a problem.
Here’s a quick example of how easy it is to start two BEAMs and get them talking!
First we need to name each BEAM and give it a shared secret - a cookie - that will be common among all of the participating nodes.
Then we’ll need to connect the nodes and share our code.
Lastly, we’ll need to run the code on all of the BEAMs. Wow, that sounds complicated. And indeed, in many languages, contemplating such a task would be fraught with coding challenges and gotchas. Not so in Elixir.
Take a look:
First, let’s start two separate iex BEAM VMs:
and in another terminal window…
We gave each instance its own name, and we set a cookie that allows each to connect to the other.
Next, we need to connect them. That is to say, make each node aware of the other. The simplest way to connect a node is to ping it!
On foo - let’s ping bar.
If you didn’t get a :pong back, you’ve goofed. Confirm that you’ve connected the nodes by listing them.
Now we can define a module on foo and send it to bar. In order to send the code to other nodes, we need our module compiled into a beam file. So, in your editor in a third terminal, define a basic module and save it to a file in the same directory where you ran the iex REPLs:
Filename: boo.ex
How do we compile this module definition into a beam file? We can use c/2 in iex (see h c/2 for more info), or we can compile it using elixirc.
This will generate a file named Elixir.Boo.beam. Now we can move this code throughout our connected nodes using the nl/1 function in iex.
This returns a tuple with :ok and a list of three-tuples. The code has now been successfully distributed to the other connected nodes - and we can run it! There are two ways to run the function defined in Boo: we can type Boo.hi into the iex on bar, or we can run it from foo@hostname.local.
To retrieve the value sent from bar to foo, we can use flush(), which drains the messages sent to foo’s process:
It’s that easy! Of course this was a trivial example, but the point is to demonstrate how easily and quickly one can develop a distributed application. Instead of sending a message, we could just as easily have performed a complex computation - using the power of all of the CPUs in the node list.
I recently received my $9 NTC CHIPs and decided a cool use of the tiny server would be to mate it with the Adafruit FONA 3G Cellular Breakout and the Adafruit FONA 2G Cellular and GPS Breakout, to test using both GPS and GSM/GPRS via ppp on the CHIP.
It turns out you can’t use ppp out of the box on the CHIP, as it’s not part of the kernel. Depending on which flash you use, you might not even have USB-serial support (via PL2303 or FTDI USB-to-serial cables).
So, roll up your sleeves and let’s build us a new kernel! At first blush, this sounds far more daunting than it is. Having been a Unix kernel hacker for more years than I’d care to admit, I can say that today’s modern kernel build environment takes a lot of the sting out of configuring and building a custom kernel.
We need to start by setting up an environment and pulling a code base to allow us to configure and build our kernel
The build environment will consist of setting up a vagrant environment for the CHIP-SDK. There are great directions for doing so on the CHIP Documentation site. Take a look at these instructions on setting up the CHIP-SDK vagrant vm.
Before you grab the source, you might need a few tools to make your VM ready to rock and roll. Specifically, add the following with your favorite package manager:
and
Once the vagrant machine is set up, clone the CHIP-linux source onto the machine into a working directory, say CHIP-linux:
Next, check out the debian NTC release…
Now you have the source and the tools and are ready to roll. But before you start the build process, you might want to start with the baseline config from your existing CHIP. This isn’t a requirement, but it will allow you to add to or subtract from a known kernel configuration.
To grab your config file, simply:
You might run into a problem or two here depending on your CHIP Linux configuration. First, you may not have a root password - you might need to set one. Second, you might not have configured sshd to allow root login. Be sure you’ve remedied both of these issues, as we’ll use root login throughout the kernel build process.
To configure our kernel, we simply invoke the makefile with the following:
This will bring up an easy-to-use, menu-driven application that lets you select or remove the items you wish to have in your kernel.
Remember that config file that we copied over from the CHIP? Let’s load that as our starting point.
Now make the modifications that you wish to your kernel. In this kernel, I want to add ppp support as well as ppp async-serial support. I select both of those, and then save my configuration file as .config.
Before you actually save the configuration, it’s a good idea to mark your kernel so you can easily verify that you’re running the kernel you built. To do that, we add a Local Version marking to the kernel release, under the following menu item:
- General setup
- ( ) Local version - append to kernel release
Add a tag that marks your kernel. I use rb-2016-08-01.0, for example.
Now I’m ready to compile the kernel that matches the configuration I just selected. To do so, simply run make as follows:
where n is the number of cores that you have on your box.
Once the compilation successfully completes, you’ll next need to install the modules into a directory that we’ll use later to place them onto the CHIP.
Add the RTL8723BS driver - this is not part of the CHIP-linux source tree.
And build it…
Now that everything is compiled and built, you’re ready to copy it to your CHIP!
Now, to boot your new kernel, there are two choices.
The first is to simply copy over the existing kernel, keeping a backup of the original:
But should something go wrong with the new kernel and you can’t boot, you’re hosed. So there’s another way.
The more complicated, but far more flexible, approach is to attach a USB-to-serial adapter to the CHIP. You can use a PL2303 and connect it as follows:
And on your computer, use screen to access the serial port:
Keep in mind that the actual device name may change depending on which OS you are using; I use the Linux device name. Also, pay attention to the digit at the end - it might be some number other than zero.
Before you connect the serial port, be sure to first copy the original kernel so you have a backup. Then shut down the CHIP before you connect the serial cable.
Once everything is connected (both literally, with the USB connector, and via the screen command), press a key to stop the U-Boot boot process and fall into a U-Boot shell.
Now you can change the bootcmd and boot the kernel that you copied to the CHIP in the previous section. This way, if there is a problem booting your new kernel, you can easily boot the backup kernel instead.
Voilà! You have successfully booted your new kernel!
If everything went as expected, you should now be able to connect your FONA 2G or 3G cellular GSM breakout to a PL2303 and connect it to your CHIP. Be sure to use a LiPo battery as well as a USB power source for your FONA. Turn it on, and verify that you can access the device.
Here again, I use screen /dev/ttyUSB0 to access the FONA. Verify that the PL2303 is accessible to the CHIP via lsusb - you should see the PL2303 signature. If you don’t see it, you likely didn’t configure your kernel properly to include the USB device.
But if you did read the directions carefully, have the device, and are able to screen to it, you should be able to communicate with the FONA by sending it AT commands and receiving back a response like OK.
Make sure you’ve added the ppp package.
If everything’s connected and you’ve successfully added the ppp package, you can now configure the ppp setup.
Now edit the file and change the MUST CHANGE items. There are essentially two of them.
The first one is the APN. In my case I am using Ting Wireless. The APN is wholesale
The second item to change is the serial device. Make sure the device name that is connected to the FONA is specified in the file - for example, /dev/ttyUSB0.
Save the file, and give it a go! To turn ppp on and off, use pon and poff respectively.
You can verify that it worked by looking for the interface: a successful ppp session will result in a device shown when you run ifconfig. If you run into problems, take a look at the log files for details on what went wrong. Common problems include forgetting to turn on the FONA device!
Take a look at the ppp chat script in the syslog file:
If you’ve successfully created your ppp connection and verified it via ifconfig, try it out! Simply ping a website of your choice.
To get a feel for the speed of the connection, grab some data too:
Once you’re done with your ppp connection, simply turn it off:
Now that you have your FONA and your kernel in order, you can get GPS set up. Start off by adding the following packages:
To start the GPS software, first stop the systemd gps services.
And now start the gpsd daemon
Use the cgps command to run the GPS data viewer, or, for a more detailed data view, run gpsmon.
Today I stumbled upon OpenWeatherMap and created an account to see what type of weather information I could get from the published API. Cool stuff.
I thought I’d write a quick little golang program to extract the JSON data and turn it into something that I could send to say(1), and then pump that through darkice(1) and icecast(1) on a Raspberry Pi - a talking weather station of sorts.
Here’s the code:
Given a zip code and an API key, the program generates output something like this:
Just a start… check out other API calls too…
A common challenge when writing Cgo and Go code that manages dynamically created objects is memory management. How do you create an object and ensure that it isn’t leaked? The garbage collector will manage native Go objects that are no longer needed once their scope is exited. But how can we do the same thing when we create dynamic memory objects with Cgo?
Turns out the answer lives at the intersection of runtime.SetFinalizer() and runtime.GC(), plus wrapping the objects in a Go struct.
Here’s an example of a golang program that allocates memory in C, and then frees it with a call to the finalizer:
So what’s going on here? First we create a struct, CFoo, that will contain our dynamically allocated memory. Inside CFoo I threw in a Mutex, a name, an allocCnt to let us track the allocations and releases of the memory, and lastly memory of type unsafe.Pointer that will point to our dynamically allocated object. The mutex and the count are solely illustrative and aren’t likely needed in practice.
Our CFoo struct has an alloc function that does two important things: it allocates the C memory, and it registers the finalizer. When we create the memory, we assign it to our struct member. The CFoo object will exist until its context is no longer valid; when it is, the finalizer will be called the next time the garbage collector runs.
The free(c *CFoo) function takes our CFoo and calls the Cgo free() to release the memory.
It’s that simple!
Okay, so everyone has a blast playing with the sub-$50 Raspberry Pi, but what if you take it up a notch? I mean drop a few extra bucks (more like $160 extra) and you can have some serious power in a small little footprint. The Jetson TK-1 has a Kepler GPU with 192 CUDA cores and a 4-Plus-1 quad-core ARM Cortex-A15 CPU. That’s a lot of power for $192 (retail). But wait, there’s more: it comes with a 1GbE Ethernet connector, 2GB RAM, 16GB eMMC, USB, audio, SATA, HDMI, and an RS-232 serial port (old school).
The board supports a derivative of Ubuntu 14.04 called L4T. Additionally, there is support for CUDA 6.5, OpenCV, and OpenGL, as well as samples - NVIDIA GameWorks OpenGL Samples 2.11. Definitely check out the floating teapot!
Aside from all the graphics power offered by this platform, you can use it as a streaming server too. Real-time mp3 streaming is easy as pi! Er, easy as Jetson!
I thought I’d see how easy it was to set up Darkice and stream live audio from the mic on the Tegra to my Icecast2 server.
To build the source we first need to install all the dependencies:
It took less than 5 minutes to get it set up! Download the Darkice source and configure it with --with-alsa and --with-alsa-prefix, plus --with-lame and --with-lame-prefix, the prefixes pointing to the library directories of the respective shared libraries. After configuring and making the darkice binary, you’ll need to set up a /etc/darkice.cfg file.
I set the device name to pulse. Fire it up, throw it in the background, and you should see a stream registered on your Icecast2 server!
You may need to set up your audio port on the Jetson.
and then save the settings using:
It’s that easy!
Helpful links:
#### Next Up: Jetson TK-1 20-node cluster

So, I was impulsive… I bought 20 of these boards, loaded up the JetPack, and started to ponder how I am going to power, connect, and house 20 of these boards.
Aside from stacking up the hardware, I’m thinking about the software to glue twenty of these things together too. I want to take advantage of CUDA and maybe even MPI… stay tuned!!
Included as part of the built-in golang packages are libraries that provide for programmatic parsing and inspection of golang source code.
The two key packages are go/parser and go/ast. Let’s take a look at the parsing process. To start, we create a token.FileSet to track positions. Next, we can parse a source file (or string), which upon success returns a pointer to an AST, an *ast.File. All the meat is inside the ast.File:
Now we can look at different aspects within the parse. For example, if we want to see what imports are included in the source file:
Lots of opportunities exist to parse through and look for declarations, functions, specific language mechanisms, expressions, etc. The possibilities are virtually limitless!
There’s a ton of reflection at play in this package. You’ll need to inspect and type-assert to look at the specific elements within a structure. A close read of the ast.Print output is a worthwhile exercise :)
Take a look at
Here I want to highlight a few cool concepts in Go:
This toy program defines a variable x that is an anonymous struct with an anonymous field of type sync.Mutex. Why make this field anonymous? By doing so, we can Lock() and Unlock() the structure without needlessly specifying a field name: simply x.Lock() will lock the Mutex.
Next we create a channel whose element type is equivalent to the unnamed structure type. We do this by mirroring the definition of the unnamed structure in the variable definition.
Now that we’ve defined our variables, we create a bunch of goroutines using a function closure. We define a function that takes an int as our closure. We then Lock() our variable, assign a function to the member x.y, and call the function, passing in the integer and returning the result to x.z. Now that we’ve assigned values to x, we pass it down the channel xx. Lastly, we Unlock() the Mutex.
At this point we’ve fired up a bunch of goroutines and sent copies of our anonymous struct down the channel.
Next we start an infinite loop (yikes!) and check our channel for results in our main goroutine.
The first case in the select checks whether we’ve received a message on the channel. If we have, we print out the value that we assigned to our integer field in the anonymous structure and tally the receipt of a message on the channel.
If we don’t receive a message, we fall through to the default case, where we manage a timeout counter, to. After 5 consecutive seconds of not receiving any messages, we break our infinite loop, report a timeout, and display how many messages we received prior to timing out.
What might make this example more interesting is to delay each go routine by a random amount of time and see if our reception of messages completes prior to receiving all of the messages.
Try modifying the function z as follows and see what happens
This more realistically demonstrates how the non-blocking channel read works when messages arrive at an unpredictable rate.
First, take a look at this link describing how to compile DarkIce with mp3 support, which is likely what you’re going to want.
Here’s the config file to set up DarkIce on the Pi:
The key element here is the [input] section. Here we specify the Alsa device from which to source audio for capturing and encoding.
The other element in the config file that you need to configure is the [icecast-2] section. Set the address (server and port) as well as the password for the icecast-2 server. Note that the default configuration for the icecast-2 server allows only two (2) streams; be sure to increase that if you’re attempting to stream more than that to the server.
If you want to encode more than one stream on your Pi, you can run multiple instances of DarkIce and use the -c flag to specify a unique configuration file for each instance.
To configure Icecast, start here. The Docs are also a good starting point too. For the impatient you can just use apt-get.
I’m very excited! I randomly found a great piece of software developed by Mohit Muthanna Cheppudira that generates musical notation and guitar tab.
Download the JavaScript or use the Chrome extension, and you can quickly encode musical notation like this:
And get beautifully formatted musical notation like this:
Take a look at the following code:
Who’s ready to go (no pun intended) a step further and create an additional member in the fTyp struct, with corresponding functions that extend the meta-abstraction by passing stuff.Wander as a parameter?
The things we do in the name of entertainment!
I’ll hack around at it and see if I can make it a permanent part of my blogging toolbag.
So what do you do if you have more than one goroutine? You need to communicate with all of them, and wait for all of them, when you quit. Here’s a contrived example that demonstrates one way to do it.
One point to note: in this example we don’t distinguish which routines quit in any particular order. In fact, as implemented here, there is no deterministic way of knowing the order. (How might you implement the code so that you could deterministically know the order of goroutine termination?)
Here’s an example that demonstrates how you might handle an arbitrary number of go routines:
Let’s say we start off with a constant number of routines we wish to create:
We will start a go routine maxGoRoutines times and then we will ensure that we wait for the same number of routines to complete by using a waitGroup
Now let’s define a simple go routine. We’ll pass a channel to let us know when to quit, a waitGroup to indicate that we’ve quit once we’ve left the routine, and an identifier to distinguish between go routines to make our demo look cool!
Once we’ve launched the routines, we’ll wait for the program to terminate. We’ve established a signal handler to let us know when SIGTERM or SIGQUIT arrives, via the following lines:
Next, we’ll block on the quitChannel waiting to receive a signal. Once we receive a message indicating that we should quit, we’ll send a boolean true to our goroutine shutdownChannel. Notice that we have to send as many messages to this channel as we have goroutines; otherwise, we’ll leave goroutines hanging around, and that will block us from terminating.
And finally, we wait for the waitGroup to complete. After each goroutine calls its deferred waitGroup.Done() function, we will unblock on waitGroup.Wait() and can successfully exit!
Here’s the whole thing from soup to nuts!
A golang package called fsnotify does most of the nitty-gritty, making the details of implementing a queue manager somewhat trivial. Below is an example of how you would set up a watch on a directory using the package:
In the main of the golang program, we create a new watcher and start a goroutine that selects events from the watcher’s Events channel. Each event is logged, and then if an event of type fsnotify.Write is received by the watcher, an additional log message is printed. Outside of the goroutine, we add the /tmp/foo directory to the watcher and wait on the done channel (which will block indefinitely).
Tired of those old boring foreground on background logs? Well, try this!
Okay, that’s sure to get you screamed at if you actually use it. But a more useful package, used by the rainbow package itself, is the rgbterm package. This package lets you color any text and display it on stdout.
Here’s an example using this package:
Enjoy!
I’ve put together a repo on GitHub that is a collection of fun and interesting procedures, mostly performing graphically interesting recursive problems. Feel free to add to the collection!
I strongly suggest starting by reading the User Guide and the Command Reference documents to orient yourself to the Logo dialect.
Have Fun!
I’ve recently been working on a start-up that’s given me good reason to get better acquainted with MongoDB and Go. Working with these two technologies even briefly, one feels that they were meant for each other. Throw in the json package and you’re cooking with gasoline!
The fun begins when you go wild with the use of map[string]interface{} and []map[string]interface{}. At first glance, to the noob, these are daunting-looking type definitions. But with a little experimenting to get your legs underneath you, you’ll find that you couldn’t live without these guys! And for good measure, their big brother, interface{}, is pretty handy too when you want to generically throw these guys around and inspect them through Go’s reflect package to see what type you’re dealing with at runtime.
Here’s the driver that I find works nicely with mongodb
More later…
With a little more help, we can go beyond the standard Go packages and use a web framework that accelerates the development of a web service stack. These frameworks do a few basic things:
1. Routing
2. Parameter Handling
3. JSON marshalling and unmarshalling
4. HTML templates and form processing
Below are two of the frameworks that are VERY lightweight and easy to use:
1. Martini
2. Gin
Basically, both of these frameworks are very similar; the biggest difference is that Martini supports dependency injection, while Gin uses a Context for its parameters.
Here’s a great article on implementing and benchmarking Bloom filters in Golang.
Three separate types of filters are implemented: standard, partitioned, and scalable.
For a detailed look, read this paper.
There are two things going on here: first, the handler section that processes the GET, and then the section handling the POST (to be more accurate, we should check r.Method once more to see if it’s exactly the POST method).
In the GET, we generate a token to ensure that the file we receive is indeed the one requested from the GET - although in this code there isn’t a check in the POST to see that that’s indeed the case (left as an exercise for the reader).
And the template referenced above, upload.gtpl, looks like:
git clone https://github.com/ant0ine/go-json-rest.git