SeaJUG kicks off its ~20th year with a talk on Apache Geode, followed by talks on Web Components/Polymer, a JVM language shoot-out, APM/performance, etc. The schedule is booked out to pretty much mid-year already with some great talks!
There are also a couple of great Docker talks lined up for the year, apart from DockerCon, which will be happening right here in Seattle. We're hosted by Distelli for our first meet-up, featuring speakers from Joyent, Microsoft and a local startup called CloudMunch.
and he's one of my peeps - super awesome :)
A history of Sikhs in battle
I've been a longtime Linux user (early 90s :) but all the cool kids have been using Macs these days. Apple sure knows how to make superb hardware, and the UI isn't all that bad either, though I do miss the extreme customization I could do with some of the Linux window managers. But I digress ...
Anyways, this came up recently so I thought I'd blog about it. Most OSes install Java at the OS level and scatter the various parts in different places depending on the OS: Linux distros typically put stuff in /usr/share or some other location, Windows puts stuff in c:\Program Files\ and the Mac folks seem to prefer /System/Library/... I find it more beneficial to install the various versions of the JDK under my home folder and then switch between them via $JAVA_HOME and $PATH.
Well, here's how you do that with the .dmg Mac installer:
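The switching part can be sketched as a tiny shell function. This assumes you've already copied each JDK out of the mounted .dmg into a directory like ~/jdks/&lt;name&gt; (the directory names below are hypothetical):

```shell
# Sketch: keep JDKs under ~/jdks and switch with a small function.
# Each ~/jdks/<name> is assumed to be a full JDK home (bin/, lib/, etc).

use_jdk() {
  jdk="$HOME/jdks/$1"
  if [ ! -d "$jdk" ]; then
    echo "no such JDK: $jdk" >&2
    return 1
  fi
  export JAVA_HOME="$jdk"
  export PATH="$JAVA_HOME/bin:$PATH"
  echo "JAVA_HOME=$JAVA_HOME"
}

# usage (name is whatever you called the unpacked folder):
#   use_jdk jdk1.7.0_71
```

Drop the function in your ~/.bashrc or ~/.zshrc and switching JDKs becomes a one-liner per terminal session.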
Great set of TED talks. I really enjoyed the one on 'The puzzle of motivation'
The Seattle Java User's Group kicks off 2015 with a talk on Java and Performance. I've got a bunch of great talks lined up for the next few months - possibly Olivier Gaudin of SonarQube, Will Iverson, Ted Neward, etc.
LinuxFest Northwest is in April as usual.
No Fluff Just Stuff looks to be doing a stint in May.
DockerCon is in June.
OSCON is in July.
Gonna be a busy few months :)
Cold, rainy weather in Seattle - check!
Hot software ecosystem and cloud computing - check!
150+ attendees for a packed house, grub and a professional videographer - check!
Bryan Cantrill speaking about "Some crazy flying pony pooping rainbows directly on you" - simply brilliant!
I'd highly recommend checking out docker as the next cool tech to get your geek on. Vagrant and Boot2Docker are two options if you're not on Linux.
During Q&A at a talk I gave recently, the topic of whether containers can talk across different hosts came up, and I mentioned this really cool project which basically enables distributed computing leveraging docker:
also check out coreos:
mobile virtualization will likely be coming along in the near future considering the fast pace of innovation:
All kinds of action happening in the management/orchestration space above docker:
A couple of tips you may find handy if you're doing some performance tuning/troubleshooting on Java/Linux:
- invest in decent APM for collecting trend data and mining it (one customer used New Relic, for example)
- strace, sar, vmstat, iostat, netstat and top are your friends, among others
- A couple of good tips for tuning the Linux network stack
- JDK 7 has a nifty tool called Mission Control ported over from JRockit
- BTrace is an AOP oriented runtime analysis tool
- Gatling is a pretty good stress tool
- set up a process to measure, research issues/fix them and iterate over and over until you know what the limits are.
- your biggest gains are going to come from eliminating inefficiencies in code. Throw all preconceived notions out the door.
You really need decent tools, profilers, etc. to be able to analyze, measure and home in on what might be causing problems. This becomes especially true when you're dealing with enterprise software, which might involve scaling multiple nodes horizontally or vertically and cluster/system-level interactions.
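For the measure-and-iterate loop above, it helps to snapshot the same metrics the same way every run so results are comparable. Here's a minimal sketch that dumps a few of the usual Linux tools into timestamped logs (it skips any tool that isn't installed; the log directory name is arbitrary):

```shell
#!/bin/sh
# Sketch: capture short vmstat/iostat/sar snapshots into a log dir so
# you can diff system behavior before and after a tuning change.

LOGDIR="${LOGDIR:-./perf-logs}"
mkdir -p "$LOGDIR"
STAMP=$(date +%Y%m%d-%H%M%S)

for tool in "vmstat 1 3" "iostat -x 1 3" "sar -n DEV 1 3"; do
  cmd=${tool%% *}                      # first word is the binary name
  if command -v "$cmd" >/dev/null 2>&1; then
    $tool > "$LOGDIR/$cmd-$STAMP.log" 2>&1 &   # run snapshots in parallel
  fi
done
wait
echo "snapshots written to $LOGDIR"
```

Run it once as a baseline, once after each change, and keep the logs alongside whatever your APM is trending.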
If you're looking for a lightweight solution to help manage the Agile/Scrum process because you can't use Rally, Jira, etc., you may want to take a look at:
It worked rather well for us on a team of around 8 folks. I even set up some shell scripts and a cron job to keep a backup on someone else's computer synced up every couple of hours in case we lost the primary instance. Poor man's HA =)
Went to NFJS recently. Awesome conference that I'd highly recommend you attend if you can, especially if you're into server-side stuff, though it was kind of nice to see the mobile additions here and there. The signal to noise ratio is always excellent. I actually used to present at the conference a loooong time ago (like, 10 years ago :). Anyways ...
Venkat should be a stand-up comedian in my opinion. He had a couple of really good talks. I especially liked his talk on features in JVM-based languages (traits in Scala, metaprogramming in Groovy, etc) though he didn't talk much about my favorite JVM language - Jython. He even mentioned tail call recursion which brought back some memories from the good ol' college days :) There was an interesting side conversation on dynamic vs scripting languages but that's a never-ending argument in my opinion. The concurrency talk on actor and STM based models was interesting. The Java memory model came up a few times during that talk.
As someone who has worked on both sides, I found the management talk quite interesting and a bit of a refreshing change from the torrent of tech. The final shots of juice to the cortex were the sessions on Clojure and Datomic. It's always nice to brush up on the state of affairs wrt Lisp-y implementations.
Some other interesting items mentioned at the conference included:
I found an old writeup I had done for the deployment process I helped set up at a large health organization a few years ago. It worked really well for deploying an architecture consisting of Apache/Tomcat-based multi-instance applications which integrated with multiple back-ends and provided a portal-style interface to the end user. We knew exactly what version of what software was deployed in which environment and Tomcat instance, when it was deployed and by whom, and could track it back to the source that was used to build it. We could also deploy at will any time of the day thanks to the stateless architecture of the applications. All automated via Puppet (except for staging/production, which was not fully baked when I moved on from the consulting gig).
Core Deploy specifies the model and process whereby a software artifact from a binary repository built by software developers is propagated through various environments in an automated fashion until it runs in production to meet the user requirements. Some of the benefits of this controlled process include:
Creating packaged build artifacts which can be deployed in various environments in an automated and repeatable fashion.
Creating a 'pipeline' for software development which includes steps such as quality assurance testing, performance testing, etc. The subsequent “Release Rhythm” section touches on this in more detail.
Providing some level of auditing to track software artifacts as they move between environments.
Initially, the software development organization has to agree on the environments that the artifacts progress through. This may consist of some of the following in order of progression:
Shared Dev – enables developers to test their built artifact to ensure that it will deploy correctly and integrate with any back-end services which they may have mocked out in their local development.
Shared QA – provides an environment for the Quality Assurance department to validate the software meets the functional requirements.
Performance – an environment to ensure that the software meets performance requirements.
Stage – essentially production, but not accessible to end users. Provides the ability to do final production validation of software artifacts before they run in production. This environment can also enable stateless automated deployments to production when paired with a load balancer.
Production – end users use the software from this environment.
Once the environments and the order in which built artifacts progress through them have been defined, the organization has to select tools to support the automated deployment of the artifacts between the environments.
Initial deployments will most likely be manual to help flesh out the deployment process. The manual steps can then be automated to provide as close to a push button experience as possible.
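As a sketch of what the "push button" end state can look like, here's a minimal versioned-deploy script. The artifact name, paths and layout (releases/&lt;version&gt; plus a "current" symlink) are assumptions, not the organization's actual setup; a real script would fetch the artifact from the binary repository and restart Tomcat afterwards:

```shell
#!/bin/sh
# Sketch of a push-button deploy: unpack a versioned artifact into
# releases/<version> and flip a "current" symlink to it.
set -e
APP_HOME="${APP_HOME:-$PWD/app}"

deploy() {
  version="$1"
  artifact="$2"
  release="$APP_HOME/releases/$version"
  mkdir -p "$release"
  tar -xzf "$artifact" -C "$release"
  ln -sfn "$release" "$APP_HOME/current"   # flip the live pointer
  echo "deployed $version -> $APP_HOME/current"
}

# demo with a dummy artifact standing in for the CI-built one
mkdir -p build
echo 'hello' > build/index.html
tar -czf myapp-1.0.0.tar.gz -C build index.html
deploy 1.0.0 myapp-1.0.0.tar.gz
```

Because every release lands in its own directory and only the symlink moves, the same mechanism gives you an audit trail of what was deployed and a cheap path back to any earlier version.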
The deployment process specifies guidelines for who requests deployments for the environments above, how to request deployments, the notification process while the deployment occurs and some typical steps in a deployment.
Developers request deployments into the “Shared Dev” environment. Deployments to the “Shared QA” and “Performance” environments are requested by QA testers. Once software artifacts are tested and ready to be released into “Stage” and then “Production”, the Product Owner and/or the Project Manager request the deployment.
It is important to have a standard deployment request process in place. This can be as simple as an email template or a file containing the necessary information to be filled out, or something more sophisticated. Paired with archival software such as a Source Code Management system, this mechanism can be used for auditing deployment requests.
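A request template of the "simple file" variety might look like the following. The fields are illustrative, not prescribed; the script just writes the template out so the example is runnable:

```shell
#!/bin/sh
# Sketch: generate a plain-text deployment request template that can be
# filled out and checked into SCM for auditing. Field names are invented.
cat > deploy-request.txt <<'EOF'
Deployment Request
==================
Requested by        :
Environment         : (Shared Dev | Shared QA | Performance | Stage | Production)
Application         :
Artifact / version  :
Requested date/time :
Rollback plan       :
Notes               :
EOF
echo "template written to deploy-request.txt"
```

Committing each filled-out request to the SCM system gives you the audit trail mentioned above essentially for free.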
A robust notification process is necessary to disseminate information as a deployment occurs. This will generally consist of emails which are sent out to the software development organization before, during and after the deployment to indicate progress. Note that this may result in a lot of emails so it would be beneficial to use filters or some other feature of email clients to archive them. The notification process should be incorporated into the deployment process in an automated fashion.
Sometimes deployments will fail for various reasons (bugs, performance issues, unanticipated software interactions, etc). In such cases it is extremely important to have a rollback strategy which ensures that the software is restored to the previous working version as soon as possible. This matters most when issues are found in production, especially after some time has passed.
This may consist of rolling back the application to a previous version and/or rolling back schema changes in a database. It could also involve a coordinated rollback if multiple services or applications have dependencies between them and need to be updated or rolled back together.
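If deploys keep each version in its own directory with a "current" symlink pointing at the live one (an assumption about the layout, not a requirement), the application-rollback half of this is just flipping the symlink back:

```shell
#!/bin/sh
# Sketch of a symlink-based application rollback: point "current" back
# at a previous release directory. The releases/<version> layout and
# version numbers below are hypothetical.
set -e
APP_HOME="${APP_HOME:-$PWD/demo-app}"

rollback_to() {
  prev="$APP_HOME/releases/$1"
  [ -d "$prev" ] || { echo "no such release: $1" >&2; exit 1; }
  ln -sfn "$prev" "$APP_HOME/current"
  echo "rolled back to $1"
}

# demo: two fake releases, 1.0.1 live, then roll back to 1.0.0
mkdir -p "$APP_HOME/releases/1.0.0" "$APP_HOME/releases/1.0.1"
ln -sfn "$APP_HOME/releases/1.0.1" "$APP_HOME/current"
rollback_to 1.0.0
```

Database schema rollbacks are the harder half; this trick only covers the stateless application tier.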
Environment Configuration Changes
Most software applications have some mechanism to externalize configuration information. It is recommended that such configuration information be maintained in a Source Code Management system to enable auditing. Requests to modify configuration information should follow the same deployment process as updating application code. The configuration changes should be tested via existing automated test suites.
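As a sketch of what "configuration under SCM" can look like in practice, here's a per-environment properties file committed to a git repo with a tag per deployed change. The repo layout, file contents and tag naming are all invented for illustration:

```shell
#!/bin/sh
# Sketch: track externalized config in git, one directory per environment,
# one tag per deployed change, so every config change is auditable.
set -e
mkdir -p config-repo/shared-qa
cat > config-repo/shared-qa/app.properties <<'EOF'
db.url=jdbc:postgresql://qa-db:5432/app
cache.ttlSeconds=300
EOF

git -C config-repo init -q
git -C config-repo add shared-qa/app.properties
git -C config-repo -c user.name=deployer -c user.email=deployer@example.com \
    commit -qm "shared-qa: bump cache TTL to 300"
git -C config-repo tag shared-qa-deploy-001
echo "config change committed and tagged"
```

`git log` and the tags then answer "what config was live in which environment, when, and who changed it" the same way the artifact pipeline does for code.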
Been trying to get a handle on the iOS and Android versions of the OptionsHouse app. The first thing I always like to do is look into automation for any kind of project. It's useful for testing and just doing general lazy-programmer type things.
My first attempt was to use http://www.sikuli.org/ which is a general-purpose, image-matching-based test automation framework. It worked well enough with the simulators on Android and iOS, but what I really wanted was to be able to run on the device.
So next on my list was http://appium.io/ which has really impressed me so far. I have it working with my Galaxy S3 and in the iOS simulator (fighting Apple's cert-provisioning-hoop-process atm), though it's been a learning curve, and the still-in-development nature of the project coupled with the lack of docs is somewhat challenging.
I really ought to go to sleep .. but this is so much fun! =)
You may enjoy reading this presentation if you're into API design:
A good read and something I've experienced in some projects which sometimes need addressing:
We've got a great presentation lined up for the Seattle Java User's Group on 5/21 at Expedia on Spring tech in case you are interested. Check out http://www.seajug.org for more details.
I had blogged about my awesome made-for-Linux system76.com laptop a while back. But the support they have exhibited in providing detailed, painless upgrade instructions to the latest version of Ubuntu just reinforces my belief that they are one of the better choices if you're looking for a Linux laptop. They seem to have upgraded some of their hardware choices too!