The First SF Redis Meetup (give it 12 minutes to read)
Having taken place more than six years ago, this isn’t exactly what you’d call news. Still, while reading Dave Jensen @djensen47‘s writeup of that meetup, I could almost picture myself being there. Some things have changed since and others haven’t, but if you’re a sentimental fool like me, the community vibe will resonate strongly with you too despite all the years that have passed.
One of the improvements coming to Redis Cluster is the ability to rebalance the shards across its nodes. After committing this feature, Salvatore Sanfilippo @antirez shows it off in this short video. BTW, with today being yesterday’s tomorrow, here’s another great ASCII movie.
@EverythingMe‘s closing down is sad, but the company is intent on increasing the general goodness in the world even as it folds. Besides letting go of its extremely talented team (for other companies to instantly hire), the company is also open sourcing a lot of its internal projects (for other companies to instantly start using). Medusa, by Dvir Volk @dvirsky, is one such project – a “cross-language, cross database, loose schema NoSQL data store, with an Object Mapper for easy querying and code integration” on top of Redis. Check it out even if you don’t need something quite like that right now; it’s inspiring.
Martin Fowler @martinfowler‘s voice carries far on the internets and is thought-provoking as usual. ListAndHash is, as the name suggests, a hybrid data structure made up of lists and hashes. As Redis has both (and IIUC the internal implementation also consists of a mix of both), would it make sense to add it to Redis? Building a ListAndHash on top of Redis’ current API shouldn’t be too hard (I’ll leave that as an exercise to you, the reader), but does it deserve a place alongside the other basic data structures? I have mixed sentiments on the subject: on the one hand, it appears useful (“common”), but on the other hand it is a tree-like nested data structure that kind of goes against the Redis spirit. A penny for your thoughts?
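If you want a feel for the idea before tackling the Redis-backed exercise, here’s a minimal in-memory Python sketch of a ListAndHash – a single collection where integer keys behave like list positions and everything else behaves like hash fields. All names here are mine and purely illustrative, not from Fowler’s post or from Redis:

```python
class ListAndHash:
    """Minimal sketch of a ListAndHash: one collection that acts as a
    list for integer indices and as a hash for any other key."""

    def __init__(self):
        self._list = []   # positional (list-like) entries
        self._hash = {}   # keyed (hash-like) entries

    def push(self, value):
        # Append to the list part, like RPUSH would in Redis.
        self._list.append(value)

    def __getitem__(self, key):
        # Integer -> list semantics; anything else -> hash semantics.
        if isinstance(key, int):
            return self._list[key]
        return self._hash[key]

    def __setitem__(self, key, value):
        if isinstance(key, int):
            self._list[key] = value
        else:
            self._hash[key] = value

    def __len__(self):
        return len(self._list) + len(self._hash)


lah = ListAndHash()
lah.push("first")
lah.push("second")
lah["name"] = "example"
print(lah[0], lah["name"])  # first example
```

A Redis-backed version could keep the two halves under a pair of related keys (say, `mykey:list` via RPUSH/LINDEX and `mykey:hash` via HSET/HGET) – which is exactly why I’m ambivalent: it composes trivially from the existing primitives.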
OH Data Mafia @datamafia > “standards before victory”. Local #Redis terminal is ALWAYS RED! #programming
migr8 is a utility for doing concurrent migration of Redis databases. It comes with an introductory blog post – Introducing migr8, a Concurrent Redis Migration Utility Written in Go – and an r/redis thread from Adam Enger, Erik Benoist @erikbenoist and Kyle Crum @kylecrum via Reverb @reverbdotcom.
Every update to an analytics package is an occasion for celebration for some of us, and this celebration is courtesy of Amir Salihefendic @amix3k from @todoist, who bumps a version to our cohort’s joy. With both popular analytics solutions for Redis getting major functionality updates within weeks of each other, can we dare to hope that the trend will last?
I’m more of a CLI type of guy – I rarely do complicated development 🙂 – but if you’ve ever used a modern IDE, then I’m sure you’re already familiar with @jetbrains‘ impressive solutions. If you’re actually using them, then this early assessment release should make you giggle with joy – a #NoSQL plugin from David Boissier @dboissier that lets you play with your Redis/MongoDB/Couchbase database in the comfort of your windowed heaven.
Everybody knows that leaving an unprotected server open to the world is a bad idea, but this report from Kevin Chen @kevinchen demonstrates just how bad it really is. The pathology makes for fascinating reading, but the first lesson to take from it is: always protect your servers from unauthorized access.
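For Redis, the bare minimum costs you two lines of configuration – listen only on an internal interface and require a password. Both `bind` and `requirepass` are stock redis.conf directives; the values below are placeholders, not a recommendation of specific addresses or secrets:

```conf
# redis.conf – listen only on loopback (or another internal interface)
bind 127.0.0.1
# refuse clients that don't authenticate first (use a long random secret)
requirepass change-me-to-a-long-random-secret
```

Firewall rules in front of the port are still a good idea; these directives are the floor, not the ceiling.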
A short study of the pros and cons of each as a communication platform (for a robot net, of course).
A nice war story from Microsoft’s cloud about delivering the required performance with the ubiquitous answer. My favorite quote: “Because the majority of our solutions are built using the Microsoft stack, it makes sense that we leverage Microsoft SQL Server most of the time. However, the SLA requirements for this solution specified a service response time of less than 1 second per request, at a rate of about 6,000 requests per hour, on a dataset with more than 50 million records. Because traditional databases like SQL Server store data to disk and are bottlenecked by IOPS, we couldn’t guarantee that every query would pass that requirement. To further complicate the matter, the subset of data we needed to expose already belonged to a traditional database containing terabytes of unrelated data. For that reason, we began evaluating solutions that could support large datasets quickly. That’s when we discovered Redis.”
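Those quoted numbers reward a quick back-of-the-envelope check (my arithmetic, not from the article): the request rate itself is tiny; the hard part is the per-request latency bound over a 50-million-record dataset.

```python
# Back-of-the-envelope check on the quoted SLA numbers.
requests_per_hour = 6_000
records = 50_000_000

requests_per_second = requests_per_hour / 3600
print(f"~{requests_per_second:.2f} requests/second")

# ~1.67 req/s is trivial throughput for any database. The constraint
# that bites is the <1s response time against 50M records: a disk-bound
# engine limited by IOPS can't guarantee it for every query, while an
# in-memory store like Redis can.
```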