
Using Python to Control Katello

[Image: Emacs editor with Python code]

I usually like to use Python to script my day-to-day tests against Katello (you may have seen some of my previous posts about using the Katello CLI for the same purpose), and I figured I’d start sharing some basic examples for anyone else out there who may be interested.

Assuming you have already installed and configured your Katello instance (learn how to do this here) with the default configuration, you have a few options to proceed:

  1. Write and run your scripts in the same environment as your server.
  2. Install the katello-cli package (pip install katello-cli).
  3. Use git to clone the katello-cli repository (git clone https://github.com/Katello/katello-cli.git) and add it to your PYTHONPATH.

Option 1 is by far the easiest approach, since you should already have all the dependencies (namely kerberos and M2Crypto) installed, but I like Option 3 as it always gives me the latest code to play with.
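If you go with Option 3, here’s a minimal sketch of making the cloned code importable from within a script. The ~/katello-cli/src path is an assumption about where you cloned the repository and where it keeps its Python package, so adjust it to your actual checkout (or simply export PYTHONPATH in your shell):

import os
import sys

# Assumption: the repository was cloned to ~/katello-cli and keeps its
# Python package under src/ -- point this at your actual layout instead.
sys.path.insert(0, os.path.expanduser('~/katello-cli/src'))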

Now we’re ready to write some code! The first thing we’ll do is import some of the Katello modules:

 from katello.client import server
 from katello.client.server import BasicAuthentication
 from katello.client.api.organization import OrganizationAPI
 from katello.client.api.system_group import SystemGroupAPI

Next, we establish a connection to the Katello server (qetello01.example.com in my case), using the default credentials of admin/admin:

katello_server = server.KatelloServer(host='qetello01.example.com', path_prefix='/katello/', port=443)
katello_server.set_auth_method(BasicAuthentication(username='admin', password='admin'))
server.set_active_server(katello_server)
 
Let’s now instantiate the Organization API object and use it to fetch the “ACME_Corporation” organization that gets automatically created for a default installation:
 
organization_api = OrganizationAPI()
org = organization_api.organization('ACME_Corporation')
print org
{u'apply_info_task_id': None,
 u'created_at': u'2013-09-12T20:15:06Z',
 u'default_info': {u'distributor': [], u'system': []},
 u'deletion_task_id': None,
 u'description': u'ACME_Corporation Organization',
 u'id': 1,
 u'label': u'ACME_Corporation',
 u'name': u'ACME_Corporation',
 u'owner_auto_attach_all_systems_task_id': None,
 u'service_level': None,
 u'service_levels': [],
 u'updated_at': u'2013-09-12T20:15:06Z'}

Lastly, let’s create a brand new organization:
 
new_org = organization_api.create(name='New Org', label='new-org', description='Created via API')
print new_org
{u'apply_info_task_id': None,
 u'created_at': u'2013-09-12T21:48:55Z',
 u'default_info': {u'distributor': [], u'system': []},
 u'deletion_task_id': None,
 u'description': u'Created via API',
 u'id': 283,
 u'label': u'new-org',
 u'name': u'New Org',
 u'owner_auto_attach_all_systems_task_id': None,
 u'service_level': None,
 u'service_levels': [],
 u'updated_at': u'2013-09-12T21:48:55Z'}

As you can see, it is pretty straightforward to use Python to create some useful scripts to drive a Katello server, whether you want to populate it with a pre-defined set of data (e.g. default users, roles, permissions, organizations, content) or to test core functionality, as I do with Mangonel, my pet project.
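For example, a populate script could be as simple as looping over a predefined list and reusing the same create call shown above (the organizations below are made up for illustration):

# Hypothetical seed data: (name, label, description) triples.
orgs = [
    ('QA', 'qa', 'Quality Engineering'),
    ('Engineering', 'engineering', 'Product Development'),
    ('Docs', 'docs', 'Documentation Team'),
]

for name, label, description in orgs:
    organization_api.create(name=name, label=label, description=description)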
 
Here’s a Gist of the code mentioned in this post, and let me know if this was useful to you.

Populating a Katello instance using the CLI

Lately I have been asked a lot about my previous script to automatically populate a Katello server instance with real data (hi reyc!). I wrote that a while back, and though it still contains some helpful commands, I figured it was about time I updated it. It took me longer than I expected to find the time to clean it up, but I can now show you a brand new script, which also includes the extra feature of downloading a manifest file directly from Red Hat’s portal and importing it as part of the process.

Currently the script assumes that you have the following information (either as environment variables or substituted directly into the script):

RHN_USERNAME: A valid username for https://access.redhat.com/

RHN_PASSWORD: A valid password for https://access.redhat.com/

DISTRIBUTOR: An existing distributor UUID with access to Red Hat Enterprise Linux 6 Server products
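
If you adapt the script to your own environment, a tiny sanity check along these lines (my own sketch, not part of the script itself) can save you from a half-finished run caused by a missing variable:

import os
import sys

# Fail fast if any of the required variables is not set.
for var in ('RHN_USERNAME', 'RHN_PASSWORD', 'DISTRIBUTOR'):
    if not os.environ.get(var):
        sys.exit('Please set the %s environment variable.' % var)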

The new script is known to work with the very latest Katello nightly build. If you have any suggestions or constructive feedback, feel free to leave me a comment here or fork the gist and send me a pull request!

Adventures in the Music Streaming World

These last couple of years have brought (along with some new wrinkles and the occasional grey hair) some interesting changes in how I manage and maintain my “digital belongings”. For a long while I used to worry about backing up and storing in a safe place all the files, photos, books, movies and music I’ve collected through the years. I also managed to accumulate a variety of external USB hard drives to keep up with this digital sprawl, each new device bigger and sometimes faster than the one before it. It got to a point where I got tired of the backup/restore game and found myself paying less and less attention to the things I had spent so much time maintaining.

Music was one of the last items that I eventually stopped backing up. One day last year I took my entire collection of approximately 9,000 (legal) songs and uploaded them to Google Play! The very next time I re-installed my desktop I didn’t have to restore my music collection anymore. All I needed was a net connection and I was off listening to my tunes! I also had full access to that collection via my Android phone and laptop! Triple yummy! Sure, without net access I would be out of luck, but I could always keep a smaller subset of my collection around if I wanted to listen to anything while offline.

After a while I noticed that I seldom played my own tunes, often spending more and more of my music-listening minutes on sites such as Pandora and Grooveshark to experience new types of music and genres. At one point I found myself subscribing to Pandora, Rdio and Spotify, looking for whichever would, in my opinion, provide the best listening experience. After about a month I think I finally found the combination that gets me closest to that goal: Pandora + Spotify.

A little background, though. I have been a Pandora (non-paying) customer for many years now, and I can’t say enough about the music quality and the variety you get from this service! I mean, for the completely FREE option (with the occasional advertisement) you get to listen to what feel like hand-picked songs matching whatever criteria you can come up with! Be it a song, a word, an artist or an album, Pandora’s matching algorithm is by far the best I’ve seen out there. Period! It is because of this, plus the fact that I can access it from anywhere and on any device with net access, that I became a paid customer.

But how about those times when I specifically want to listen to a given song or album, or even make a playlist with some of my favorite jams? After a while I learned a nice trick that lets you sample an album from Pandora, but that wasn’t enough for what I wanted to do. So Grooveshark was definitely a great find for me, and for a while I really enjoyed the freedom and wide selection it offered for free. Something really simple that also made a difference for me was that I could “scrobble” the music I listened to to my Last.fm account, something that Pandora doesn’t do. But alas, I couldn’t listen to my playlists on the go or even using my phone, so I started looking for options.

Now Rdio impressed me right away for being exactly what Grooveshark was, but with the added capability of being available on multiple platforms, and including some of the newest and latest releases! The pricing model was a bit more expensive than Pandora’s, but it did give me the ability to create my own playlists and interact with my friends via different social networks. I definitely enjoyed the experience and would have stuck with it if it weren’t for the small music collection that is available right now. I understand that Rdio tries to add as many new (and old) titles as it can, but at the end of the day, I couldn’t always find what I was looking for.

Spotify was the “dark horse” during my experimentation, mostly because it didn’t offer a first-class client for Linux. There was a “half baked” client out there that never worked for me, or crashed too many times… I even ran the Windows client via Wine for the first 2-3 weeks, but it felt “dirty” to pay for a service that would not run natively or provide decent support for my platform. The Android and iOS apps worked like a charm, but I spent the bulk of my days in front of a Fedora box, and listening to music from my phone was not going to cut it for me. The music variety is definitely much larger than what Rdio offers, and it even has its own “Radio” streaming that provides something similar to what Pandora does. But the matching algorithm is still light-years behind Pandora’s, and I often found myself wondering how certain songs and genres ended up in the “station” I was listening to.

After about a month into the experiment, it looked like I was going to keep Pandora and Rdio to get great music selection and variety (Pandora), web front end access and multi-platform support (Pandora, Rdio), and playlists (Rdio)… until a co-worker mentioned that Spotify had just announced their web-based player! All of a sudden Spotify went from being in last place to bumping Rdio out of the equation!

[Image: Spotify web player]

So now I am using both Pandora and Spotify at home and on the go (Spotify lets you download your playlists for offline listening), and so far the experience has definitely been positive. I feel that the streaming quality and variety have provided me with many enjoyable hours of music while I work, and even my kids have started experimenting with Pandora as they get more exposure to the musical world. And if I ever feel like listening to some of my own music, some of which can’t yet be found on Spotify, I can always turn to Google Play… and I definitely enjoy not having to manage my backups anymore. :)

Perks of being a polyglot

Yesterday I had one of those “once in a lifetime” opportunities, thanks to my wife, who dragged me to a presentation hosted by the University of North Carolina. The presentation by Dr. Eduardo Torres Cuevas, titled “Preserving Cuba’s Cultural Heritage in the 21st Century”, attracted a small gathering, apparently mostly made up of UNC staff and students currently enrolled in one of their language courses. I wasn’t really sure what to expect from it, but being the supportive husband that I am, I signed off from work a bit early and, together with our 2 kids, drove to the main campus.

The entire lecture was delivered in Spanish while a translator tried her best to keep up with Dr. Cuevas’ detailed and humorous style of prose as he told us about the history behind Cuba’s National Library. I must tell you, being a “real time” translator is not an easy task, and though Dr. Cuevas tried to slow things down so she could do her job in a timely fashion, she still had to summarize a lot of what was being said in order to keep up with him. Sadly, a lot of the “good stuff” was never mentioned or was completely lost in translation.

Dr. Cuevas was an excellent speaker, able to put everyone at ease right away and transport us to the Havana of the 1950s. The entire lecture lasted approximately 50 minutes, but in reality it felt more like a small fraction of that! I could tell that he is extremely passionate about preserving the Cuban cultural heritage for posterity, and with his strong character and charisma, I dare say that everyone in attendance was completely captivated by his stories, as if in a trance.

Now, let me remind you of one minor detail I mentioned early on that you may have overlooked: everything was in Spanish, a language which, though not completely unfamiliar to me, is not my first or second language in order of fluency (Brazilian Portuguese and English, in that order, if you’re wondering). Sure, there was a translator, but after the first 3 minutes I completely ignored her voice and focused entirely on Dr. Cuevas. Eventually, the translator’s voice became white noise or just an annoyance, as we would have to stop the flow of the lecture in order for her to turn beautiful storytelling prose into a short. almost. dry. list. of. facts. and. numbers!

I feel extremely lucky that I was able to not only meet but hear someone who may be a very important person in the history of Cuba. I also feel extremely lucky that I was able to follow and understand the entire presentation in the “original format” without the need for “captions” or any other aid. Had I relied solely on the translation, I would never have gotten all the little jokes, nuances and true meaning behind the words uttered by the gentleman from Havana… and most likely would not feel this great urge I now feel to make Cuba, and especially the National Library of Cuba, my choice for the next time I go anywhere outside the States for vacation!

(Source: bnjm.cu)

Red Hat: 366 days later (and counting)

[Image: Red Hat 1 year]

Woke up to the following email this morning:

Dear Og Maciel,

Congratulations on your one-year anniversary with Red Hat! Thank you for your commitment and work over the past year. We hope that it has been everything you expected it to be and look forward to celebrating your future success with the company.

Time sure flies when you’re having too much fun! I can’t believe it’s been one whole year since I joined Red Hat as a Senior QA Engineer to work on their CloudForms project! So much has happened since then that it is a bit hard to remember all the new and exciting things I had the pleasure of being a part of! It is absolutely great to be able to work in such a cool and challenging environment, and to experience first hand what open source and meritocracy really mean!

I remember being asked during my interview process what my long-term goals were (or something along these lines), and my answer was:

"In the next 5 years I want to be the Go To person to all questions related to CloudForms!”

Well, one year down and four more to go! :)

Extending the default EC2 root partition: Follow up

[Image: EC2 wizard]

Just wanted to follow up on my previous post about how to resize the root partition of an EC2 instance. Turns out that, once you’ve edited the root partition size in the launch panel, you can perform the resize command right away, as soon as the instance is up and running and you have ssh’ed into it.

[root@ip-aa-bb-cc-dd ~]# resize2fs /dev/xvde1

This is definitely better than what I thought one had to do to get a bigger root partition.

Extending the default EC2 root partition for an instance

[Image: EC2 wizard]

Today I was playing with EC2, trying to launch a RHEL 6.3 instance so that I could install the latest version of Katello and beat on it a bit… just for fun, you know? Using the EC2 Management Console web interface, I went through the “classical” wizard to select all the components I wanted for an m1.large instance, making sure to edit the default 7.5 GB root partition they give you so that I would have more space available to synchronize content… but when the instance finally came up, I realized that my disk space was still showing the default value:

[root@ip-aa-bb-cc-dd ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvde1            5.7G  1.7G  4.0G  30% /
none                  3.7G     0  3.7G   0% /dev/shm

I racked my brain over that for quite some time, retracing my steps and even terminating my instance and starting from scratch a few times, thinking that perhaps I had missed an obvious step. Eventually I came across a few posts online and was able to solve my problem, which I will try to describe below. Obviously, feel free to read the original posts for more information. The steps are as follows (a scripted sketch follows the list):

  1. Once your instance is up and running, stop it by selecting the Stop option from the Management Console window.
  2. Now, switch to the Elastic Block Store section and select your instance’s volume from the Volumes subsection.
  3. Detach the selected volume.
  4. Select the option to create a snapshot off of the detached volume.
  5. Switch to the Snapshot subsection and select the newly created snapshot.
  6. Select the Create Volume option and create a larger volume.
  7. Go back to the Volumes subsection, select the newly created volume and attach it as the root volume for your instance (should be /dev/sda1).
  8. Restart your instance.
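
For what it’s worth, the same dance can be scripted with the boto library. Here’s a rough sketch; the region, instance and volume IDs, and the 80 GB target size are all hypothetical placeholders for your own values:

import time

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')  # assumption: your region
instance_id = 'i-12345678'                      # hypothetical instance ID
volume_id = 'vol-12345678'                      # hypothetical root volume ID

# Stop the instance and wait until it is fully stopped.
instance = conn.get_all_instances(instance_ids=[instance_id])[0].instances[0]
conn.stop_instances(instance_ids=[instance_id])
while instance.update() != 'stopped':
    time.sleep(5)

# Detach the root volume and snapshot it.
conn.detach_volume(volume_id)
snapshot = conn.create_snapshot(volume_id, 'Snapshot of original root volume')
while snapshot.status != 'completed':
    time.sleep(10)
    snapshot.update()

# Create a larger (80 GB) volume from the snapshot in the same zone.
volume = conn.create_volume(80, instance.placement, snapshot=snapshot.id)
while volume.update() != 'available':
    time.sleep(5)

# Attach it as the root device and start the instance back up.
conn.attach_volume(volume.id, instance_id, '/dev/sda1')
conn.start_instances(instance_ids=[instance_id])

Either way, you still need to run resize2fs on the instance afterwards, as described next.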

As soon as the instance is back online, ssh into it and you will notice that the reported disk size still has not changed. Now, resize the root partition so that it can “absorb” the larger volume we created:

[root@ip-aa-bb-cc-dd ~]# resize2fs /dev/xvde1
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/xvde1 is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 5
Performing an on-line resize of /dev/xvde1 to 20971520 (4k) blocks.
The filesystem on /dev/xvde1 is now 20971520 blocks long.

If everything goes well, you should now see a much larger disk available (an 80 GB volume in my case):

[root@ip-aa-bb-cc-dd ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvde1             79G  1.7G   77G   3% /
none                  3.7G     0  3.7G   0% /dev/shm

A few posts I came across online were helpful in figuring this out.

PS: After writing this I wonder if I’m expected to run resize2fs after editing the root partition during the wizard process and starting the instance… will try that next time.

Tough Times

These last few days have been rough for my family due to a very unfortunate event. About 10 weeks ago my wife and I learned that she was pregnant, something we both wanted very much since our last child was born in 2007. So for the last 10 weeks we have been taking all the necessary steps to prepare our daughters, our house and our work schedules to welcome another child into our family. We also decided that we would break the news very slowly at first, sharing it only with immediate family and close friends, and telling everyone else after we had our first ultrasound.

That first ultrasound happened this last Friday, and for the first time we also had the kids with us in the room so they would be the first ones to hear their baby sister/brother’s heartbeat… but we didn’t. After 5 minutes of unsuccessful attempts to pick up the slightest sound of a heartbeat or even the faintest sign of movement, our doctor recommended that we try the ultrasound at the local hospital, as they had a more advanced machine and would be able to give us a more concrete diagnosis.

So we spent Saturday and Sunday agonizing over what would happen Monday morning, when we were scheduled for the next session… and tried our best to live as much of a normal life as one can, trying to shield our children from the anxiety and the 1000 horrible thoughts that kept creeping through our heads.

Yesterday we managed to get someone to watch over our kids and headed early to UNC Hospital for what we expected would be a life-changing moment. I guess I could write about how we, my wife and I, agonized every single second until we were finally taken into the ultrasound room. How we held our breath as the person operating the machine performed some routine checks and measurements before she finally switched to the test we cared about the most… and how heartbreaking (how ironic) it was to hear only silence coming off the heart monitor.

Needless to say, we are very much heartbroken and still trying to digest what just happened to us. I believe I am coping a bit better than my wife, but this feeling that I have things under control comes and goes in waves. She keeps going over and over what she could possibly have done differently, or what she may have done to endanger the life of our unborn child.

Late yesterday afternoon I broke the news to our oldest daughter, and after the initial shock she seems to have taken it as well as a child can be expected to. She has been very supportive and is always telling her mom how much she loves her. However, I haven’t told our youngest one yet… She’s been looking forward to sharing her room with a new sibling, and I don’t feel that we’re ready to tell her yet… but that day will most likely come some time this week.

As my wife had what the doctor called a “missed miscarriage”, the 10-week-old baby is still inside her and we now have to decide how to proceed with the removal and disposal of what once was a living human being. Tough decision to make, I assure you!

Our immediate family has been aware of the situation and has been extremely supportive, even though they’re spread across NY and NJ. Our pain is still very much fresh and we’re still debating what the next step will be. With Halloween literally around the corner, we will also have to make sure our kids are not affected by that next step, whatever it may be.

So I wanted to share a bit of our most intimate news with all of you who follow me, and with my friends who, near or far, are very important to us…

Error: uninitialized constant Heroku::API (NameError)

Got stuck with this error on Fedora 17 while trying to get a project onto Heroku:

Error: uninitialized constant Heroku::API (NameError)

After some quick back and forth with the Heroku guys I finally got it fixed:

$ rvm uninstall 1.9.3
$ rvm reset
$ sudo yum reinstall openssl openssl-devel
$ rvm install 1.9.3
$ rvm use 1.9.3 --default
$ heroku login

Hope this helps someone else :)

(Source: github.com)

Pylyglot: Open Source Translation Search

It’s been a while since I wrote about Pylyglot, my translation search tool that I use whenever I translate open source applications. Haven’t heard about Pylyglot? Read the About page for more info!

The reasons for the long hiatus are too many to enumerate, but suffice to say that the project is very much alive and I intend to keep updating the translations database as often as possible.

So, what’s new? For starters, there’s a new and fancy language selector that lets you search for a language as you type:

[Image: Language selector]

There’s also support for pagination, as searching for certain terms can return quite a few results:

[Image: Pagination]
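
For the curious, pagination like this comes almost for free with Django’s built-in paginator. Here’s a minimal sketch of what such a view might look like; the Translation model, field name, template and page size are hypothetical placeholders, not Pylyglot’s actual code:

from django.core.paginator import EmptyPage, PageNotAnInteger, Paginator
from django.shortcuts import render

from pylyglot.models import Translation  # hypothetical model

def search(request):
    # Fetch all translations matching the searched term.
    query = request.GET.get('q', '')
    results = Translation.objects.filter(msgid__icontains=query)
    paginator = Paginator(results, 25)  # 25 results per page (an assumption)
    try:
        page = paginator.page(request.GET.get('page', 1))
    except PageNotAnInteger:
        page = paginator.page(1)  # default to the first page
    except EmptyPage:
        page = paginator.page(paginator.num_pages)  # clamp to the last page
    return render(request, 'search.html', {'page': page})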

Lastly, the translations are only a couple of weeks old and should be updated again soon!

One thing that I removed was the ability to ask Google Translate to provide the translation for a term. It is very unfortunate, for I was also working on a feature that would let you upload a translation file (e.g. nautilus.po) and get back a fully translated file, where all automatically translated strings were flagged as “fuzzy” by default so that someone could then review the end result. But alas, due to the somewhat recent changes to the Google Translate policy, I decided that it was not worth supporting this feature anymore.

Work and my personal life have kept me busy, but I still intend to keep this project alive and hopefully turn it into something useful for those who work on translations. There are a few things I’d like to do, such as updating the code to be more Django 1.4 “compliant” and making the process of updating translations more dynamic and “on demand”. Unfortunately, hosting Pylyglot on a shared Dreamhost environment using a “shotgun” approach doesn’t work well, as they have some very restrictive memory consumption thresholds for individual processes.

Anyhow, feel free to fork the code and send your suggestions (or pull requests). :)
