Hundreds of writing books get published every year, and most of those books are written by people who have written…a book on writing. I kid you not. These people have a method or a scheme or they teach a writing class—even though I have no idea how they get those gigs. (Okay, I do. They get an MFA, which universities seem to think is more important than actual writing experience.)
Those writing books have nothing in common with the writing books in this bundle. Together, the authors of the books in The Write Stuff 2016 Bundle have more than two hundred years of writing experience, and have contributed more than five hundred award-winning and bestselling books (fiction and nonfiction) to the world of literature.
We know writing, the writing life, and what makes a writing career. And we want to share it all with you.
Even though the books in this bundle discuss craft, including one of the classic writing books of all time, most of the books you'll find here explore the career killers, things like how to put butt in chair on a regular basis, how to organize your business career, and the all-important (at least to me) how to make a living as a writer.
So if you are a writer, or are simply dreaming of becoming a writer, this bundle is for you. – Kristine Kathryn Rusch
The initial titles in the Write Stuff 2016 Bundle (minimum $5 to purchase) are:
Weinberg on Writing - The Fieldstone Method by Gerald M. Weinberg
How to Negotiate Anything - Freelancer's Survival Guide by Kristine Kathryn Rusch
Stages of a Fiction Writer by Dean Wesley Smith
Business for Breakfast, Vol. 2: The Beginning Professional Publisher by Leah Cutter
The Rational Writer: Nuts and Bolts by Mindy Klasky
If you pay more than the bonus price of just $15, you get all five of the regular titles, plus five more:
Creating an Online Presence for Writers by Cat Rambo
How to Make a Living With Your Writing by Joanna Penn
Heinlein's Rules - Five Simple Business Rules For Writing by Dean Wesley Smith
Writing the Novel from Plot to Print to Pixel by Lawrence Block
The Writer's Business Plan: A Plain English Guidebook by Tonya D. Price, MBA
The bundle is available for a very limited time only, via http://www.storybundle.com. It allows easy reading on computers, smartphones, and tablets as well as Kindle and other ereaders via file transfer, email, and other methods. You get multiple DRM-free formats (.epub and .mobi) for all books!
On the Quora website recently, a participant asked "What is the truth about 10x programmers?"
Most of the answers there were well-intentioned, but misguided. Many of today's critical programming tasks are simply too big and complex to be handled even by a 100x programmer (not to speak of maintaining what a 100x programmer produces but doesn't want to be bothered maintaining).
For many years now, we have understood that the programming job is not really a job for individuals, but for teams. (Yes, an individual can produce a fine small program, such as a game. I have done that myself, many times over 60 years in the business.) So what we really want is a 10x leader/teacher of programmers. If you can't pass on your 10x abilities to others, you're not what we want for those enormous programming tasks.
Recently, "Tommy" posted the following question on Quora, a question-answer site with lots of good stuff (and some really cheesy stuff) on a variety of topics:
How can I be a faster programmer?
Tommy received a goodly portion of excellent answers, with lots of details of tips and techniques—great stuff. Reading them over, I saw that a few of the answers offered some good high-level advice, too, but that advice might be lost in all the details, so I decided to offer a summary high-level view of what I've learned about speedy programming in my 60+ years in the computer business.
First of all, read all the great answers here on Quora and incorporate these ideas, one at a time, into your programming practices. In other words, to become faster or better in some other way, KEEP LEARNING. That's the number one way to become faster.
Second, ask yourself, with each job, "Why is programming speed so important for this job?" Often, it's an unskilled manager who knows only a dangerous amount about programming, but can always say "work faster." So ask yourself what an appropriate speed is for this job, given that speed and error are tightly related.
Third, ask yourself, “Do I really need a program for this problem?” If you don’t have to write a program at all, that gives you infinite speed. Sometimes you don’t have to write a program because one already exists that will do an adequate job. Sometimes you don’t have to write a program because a back-of-the-envelope estimate will give all the answers needed. Sometimes you don’t have to write a program because there’s no real need for the function you’ve been asked to implement.
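The back-of-the-envelope point can be made concrete. Here's a minimal sketch with entirely invented numbers: before anyone writes a log-archiving program, a few lines of arithmetic may answer the only question that was actually asked (will a year of logs fit on one disk?).

```python
# Hypothetical scenario: asked to "write a program" to manage a year of
# log files, when the real question is whether they fit on one disk.
# All figures below are assumptions for illustration.

events_per_day = 2_000_000   # assumed traffic
bytes_per_event = 500        # assumed average record size
days = 365

total_bytes = events_per_day * bytes_per_event * days
total_tb = total_bytes / 1e12

print(f"Roughly {total_tb:.1f} TB per year")  # prints: Roughly 0.4 TB per year
```

If the answer is "well under one disk," no program is needed at all, which is the fastest programming there is.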
Fourth, don’t skimp on up front work because you’re rushing to produce code. If you’re working with the wrong requirements, any nifty code you write will be worthless, so any time you spend on it will be wasted. If your design is poor, you’ll write lots of wasted code, which means more wasted time.
Fifth, when you finally have a good design for the right requirements, slow down your coding in order to go faster. We say, “Code in haste; debug at leisure.” Take whatever time you need to think clearly, to review with your colleagues, to find solid code you can reuse, to test as you build incrementally so you don’t have to backtrack.
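A minimal sketch of the "test as you build incrementally" habit, in Python; the functions and checks are invented for illustration, not taken from any real project. The point is that each small piece is verified before the next piece is built on top of it, so there's no backtracking.

```python
# Build a small piece, check it immediately, then extend.
from collections import Counter

def word_count(text: str) -> int:
    """First increment: count whitespace-separated words."""
    return len(text.split())

# Check the first increment before moving on.
assert word_count("code in haste") == 3
assert word_count("") == 0

def most_common_word(text: str) -> str:
    """Second increment: most frequent word, built only after
    word_count is known to work on the same kind of input."""
    words = text.lower().split()
    return Counter(words).most_common(1)[0][0] if words else ""

# Check the second increment immediately, so errors can't pile up.
assert most_common_word("to be or not to be") == "to"
```

Each assertion is a few seconds of "slowing down" that prevents hours of debugging at leisure later.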
Sixth, develop in a team, and develop your team. If you think teammates slow you down, then it’s because you haven’t put the effort into building your team. So, in fact, we’re back to the first point, but now at the team level. Keep learning as a team. Your first job is not to build code, but to build the team that builds code.
Many of these principles are developed in much more detail in my many books on software development, so if you want to learn more, take a look at my Leanpub.com author page and make some choices among about fifty books and a bundle of bundles of books. Good reading!
The gang was enjoying a BBQ Pig Out at Rudy's. It was a magical moment until Rusty and Millie started to argue about Agile software development.
Rusty started it all by saying, "Agile is magical."
Millie banged on the table with a half-chewed pork rib. "That's ridiculous. There's nothing magical about it."
"Sure there is." Rusty pulled a Sharpie out of his pocket protector and printed "AGILE" on a paper towel (which passes for a napkin in Rudy's). "There are just a few things management has to provide—like MONEY." He sketched a capital M on the towel, making MAGILE.
"Money's not enough," said Millie.
"Of course not. Management has to eliminate environmental interference.' With one smooth stroke, he crossed out the "E."
Millie frowned and shook her head, but Rusty took no notice. "And they need to Cooperate, and not just occasionally, but All the time." He added the C and A, finally producing "MAGICAL."
"Cute," said Millie, her tone sarcastic, but she was clearing struggling not to smile. "But successful projects require more than waving a Sharpie wand and pronouncing 'AgileCadabra.'"
We all knew that Rusty was pulling our legs. Millie, of course, was right. If you want to succeed with an Agile approach, you need more than magic rituals. Not only that, you need to avoid quite a few rather common mistakes that lead to failure.
Common Mistakes in Building New Things
In my experience, these common mistakes are not unique to Agile projects, but will kill Agile projects just as easily as they kill Waterfall or any other approach:
1. Committing to a schedule or cost without having any relevant experience with this type of project.
2. Using experience from a similar but smaller project to commit to an estimate on a larger one.
3. Extending requirements to "optimize" or beat unknown competition.
4. Failing to recognize signs of impending failure, and/or failing to act on them by extending schedules or reducing costly requirements (like those that diminish velocity by creating more frequent failed tests).
5. Failing to recognize limits of the environment or process, or recognizing them but being unwilling to change them.
6. Simply undertaking too many simultaneous tasks and perhaps failing to complete any of them.
7. Not recognizing both changes and opportunities presented by a new technology.
8. Not asking the customer, out of fear or for lack of contact with a customer surrogate.
9. Not asking anyone for help (fear?).
10. [I invite my readers to contribute more failure dangers to this list.]
The Underlying Failure
Beneath each of these failure reasons, and others, lies one generalized failure. I explain that failure in the remainder of this article, posted as a chapter in my book, Agile Impressions.
Last week, Joe Colantonio interviewed me for his milestone 100th Test Talk. In case you haven't heard it yet, Joe extracted a few quotes and insights from this Test Talk. I've put some of them here, followed by a fascinating supporting story from one of my listeners.
Joe asked me something about the secrets of how I managed to do so many things, and I gave a couple of long-winded answers:
Maybe the secret, at a sort of middle level, is to stop looking for secrets and just figure out one little improvement at a time. Get rid of things that are using your time but are not productive and that you don't like to do. Part of it is that you have to love what you're doing; if you don't love it, then it's pretty hard to bring yourself back to it.
I guess the secret of being productive, if there is a secret, is to adapt to what is, as opposed to what should be. Of course, another way to describe testing is that it's finding out what actually is, as opposed to what's supposed to be. A program is supposed to work in a certain way, and the tester finds out it doesn't work in that way. When you report that, then somebody does something about it. If you live your life the same way, you'll be pretty productive.
One Thing Testers Should Do
Joe asked about what testers should be doing that they may not be doing, and one of my answers was this:
I think that you need to highlight certain things, like: I need to just sit down and talk with the developers and ask them what went on, what happened, what was interesting, and so on, in an informal way. This gives you a lot of clues about where you might be having trouble.
History of Testing
On the history of test groups, I had this to say:
We made the first separate testing group that I know of historically (I've never found an earlier one) for that Mercury project, because we knew astronauts could die if we had errors. We took our best developers and made them into a group. Its job was to see that astronauts didn't die. They built test tools and all kinds of procedures, and went through all kinds of thinking and so on. The record over half a century shows that they were able to achieve a higher level of perfection (though it wasn't quite perfect) than has been achieved before or since.
Automation in Testing
Joe's sponsor, Sauce Labs, specializes in automated testing, so we had an interesting back-and-forth about test automation. Among other things, I said,
You can automate partial tasks that are involved in testing, and maybe many tests, and save yourself a lot of effort and do things very reliably, and that's great. It doesn't do the whole job.
To which, Joe taught me the expression he uses is not "test automation" but "automation in testing." We were in violent agreement.
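One way to picture "automation in testing" rather than "test automation": let the machine do a chore, such as generating boundary-value inputs, while the tester keeps the judgment about what the results mean. A small sketch with invented names and an invented toy system under test:

```python
# Sketch of "automation in testing": the machine automates a chore
# (enumerating boundary-value candidates); the human tester decides
# which results actually matter. All names and limits are invented.

def boundary_candidates(low: int, high: int) -> list[int]:
    """Values just outside, on, and just inside the edges of [low, high]."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def accepts_age(age: int) -> bool:
    """Toy system under test: ages 0..130 are considered valid."""
    return 0 <= age <= 130

# The machine produces the cases; the tester reviews the answers.
for age in boundary_candidates(0, 130):
    print(age, accepts_age(age))
```

The loop does the reliable, repetitive part; deciding whether 130 *should* be accepted, or whether the boundary itself is wrong, is still the tester's whole job.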
How to Improve as a Tester
Joe then asked how I'd recommend someone improve as a tester:
The little trick I give people is this: when you find yourself saying, "Well, that's one thing I don't need to know about," stop, catch yourself, and go learn about that. It's exactly like the finger-pointing we talked about before: when the developer says, "Well, that's a module you don't need to look at," that's the one you look at first. Do the same thing with yourself. When you say, "That's a skill I don't need; it has nothing to do with testing," then it does, and you'd better work on it.
How Testing is Misunderstood
There are a lot more nuggets like these in the podcast, but here's one straightforward example that supports most of the things I had to say. Albert Gareev sent me this story after listening to the podcast:
"We are not NASA,"—said a Director of Development.—"My testers just need to verify the requirements to make sure that our products have quality."
What she wanted to "fix" in the first place, was that "the testing takes too long." There were a "Dev testing" phase, and then "QA testing" phase, and then "internal business acceptance testing" phase, then "client business acceptance testing" phase—sometimes even two. If bugs were caught at any point, the updated version would go through the same whole pipeline of testing. The product was indeed doing what's intended for the customers and for the company, with a very low number of incidents that were successfully taken care of.
During my initial analysis, I found that those phases were highly compartmentalized. Certain bugs had a very long life span before the Dev team learned of them. Certain problems kept recurring. Testing was performed as a kind of clueless Easter egg hunt; the Dev team didn't feel the need to share which code modules they had updated.
It appeared to me that the product was in good shape only thanks to these lengthy testing phases, which made the chances of "stumbling upon" errors high enough to catch the majority of bugs.
So I brought my findings back to the Director, and made a few suggestions—around collaboration, feedback loops, and especially about training testers to model risks, and to test for them.
But she didn't buy it. She still thought that "they're doing too much testing."
I was puzzled. Frustrated. Even slightly offended, being a tester devoted to his profession.
And then I took a project manager for a cup of coffee.
The PM shared that the Director was in her first year of being in charge of software development. She had spent the previous decade as a director of customer support.
"Huh."—I said.—"All these years she saw the products that already undergone a thorough testing-and-fixing process before her team could try them."
"Exactly."—replied the PM,—"She has no idea that at first it's always messy."
More on Testing and Quality
If you want to learn more about testing and quality, take a look at some of my books.
Back in the earliest days (probably before most of the readers here were born, around 1958), there was no distinction between testers and developers. They were all called programmers, and the best of them were chosen for our test group (ours, on Project Mercury, was the first test group that I know of).
Our test group was copied for a number of IBM Federal Systems projects, but over the years, people started having a different sort of test group. These groups were not made up of programmers, but were largely chosen because they would be cheaper than programmers. It was widely believed that any idiot could do testing. Many times I heard managers say they could train monkeys to sit around banging on keys.
Since that time, gradually over the years, more managers have come (ever so slowly) to realize that testing is a specialty that requires special people with special training and talent. We still have many “monkey-managers,” and for those managers, the role of testers has not changed much. But where professional testing is valued for itself, yes, the roles of tester and developer have become more similar (though not identical).
BTW, the role of monkeys hasn’t changed much, but, then, some developers play that role very well.
My father gave me a slide rule when I was about 7 years old. I used it to compute baseball batting averages, which was what motivated me at that time in my life. I still have that slide rule. It's a small, cheap slide rule, made from bamboo with plastic (or maybe ivory) faces. My father bought them in quantity to give to the young ladies who computed customer bills for Sears, where he worked for 20+ years improving processes. The slide rules were used to check their multiplications, preventing the enormous numbers of errors that had previously been sent to customers.
The next interactions were with tables of sines, cosines, and logarithms, used for more precise calculations, mostly in math classes and personal experimenting with numbers for fun.
At age 11, still in Chicago, I read a magazine article about computers, then familiarly known as "giant brains." By this time, I had been labeled a "brainy" kid, and I longed to learn more about brains. I determined that computers were what I wanted to do with my life—and they turned out to be. I didn't know anyone who had ever seen a computer, let alone used one.
I watched and waited for signs of a computer, but went all through high school without seeing one. Well, not exactly. I had a summer night job in a large bakery computing recipe requirements for the following day's orders. I used a Monroe adding machine.
When I entered college at 16, I told the counselors I wanted to work with computers, but none of them knew anything about computers other than that they had something to do with electrical engineering and physics. They decided I should major in Physics, because I was good in math, which would be "wasted in EE." One day, I saw a notice for a "computing course" using Monroe adding machines, given by the Monroe company. It was a short course, and I already knew most of the material better than the instructor. But I passed, earning a certificate that I've lost somewhere along the way. It's the only computing course I ever took, and the only "degree" in computing that I ever earned.
My next encounter with a computer was when I looked in the mirror. I got a job in the Physics department as a "computer"—that was my job title. I was shown a Friden electromechanical calculator, which I used along with pencil and paper (and eraser) to invert 10 by 10 matrices for faculty members.
I graduated with honors in Physics, Math, Philosophy, and English, then went to Berkeley to study graduate Physics. I was perhaps two months from a doctorate in Physics (exams passed, thesis experiments finished and waiting to be written up), but I read an IBM ad in Physics Today looking for problem solvers. The ad described the work I had dreamed of since age 11, so I wrote to IBM and was hired as an Applied Science Representative—on June 1, 1956.
I was given no training, but my first assignment was to teach a programming course to three other new Applied Science Representatives who were to join IBM in San Francisco two weeks later. (I had a wife and 1.66 kids by then and needed the two weeks' pay, so I started early.) The first machine I encountered was an IBM 607, which was a wired program machine with 20 wired program steps (this was the expanded version) and one signed ten-digit number of data storage. In my first week, using the library of manuals, I mastered that machine plus a bunch of older punched-card machines.
I spent the next week learning to program the IBM 650, a stored program machine that kept programs and data on a magnetic drum, but had wired control panels for input and output formatting. This was all theoretical, as there was no IBM 650 anywhere in San Francisco yet. When one finally arrived shortly thereafter, it was the first stored program machine I had ever seen.
While waiting for the 650's arrival, I earned a reputation as a "whiz kid" (the term "programmer" wasn't yet in use) by making the 607 do tricks. My most impressive trick was turning on all the lights on the 607 control panel, which won me a dollar bet.
When the 650 arrived, I immediately tried out a program I had written to compute tables of sines, cosines, and logarithms. With the arrival of computers, such tables were now totally useless, but I was in nerd-heaven. I was being paid $450 a month for playing with the world's greatest toy, a job I would gladly have paid $450 a month to do—though I wisely didn't tell IBM that.