Object Thinking

David West


In OBJECT THINKING, esteemed object technologist David West contends that the mindset makes the programmer--not the tools and techniques. Delving into the history, philosophy, and even politics of object-oriented programming, West reveals how the best programmers rely on analysis and conceptualization--on thinking--rather than formal process and methods. Both provocative and pragmatic, this book gives form to what's primarily been an oral tradition among the field's revolutionary thinkers--and it illustrates specific object-behavior practices that you can adopt for true object design and superior results.

Gain an in-depth understanding of:

  • Prerequisites and principles of object thinking
  • Object knowledge implicit in eXtreme Programming (XP) and Agile software development
  • Object conceptualization and modeling
  • Metaphors, vocabulary, and design for object development

Learn viable techniques for:

  • Decomposing complex domains in terms of objects
  • Identifying object relationships, interactions, and constraints
  • Relating object behavior to internal structure and implementation design
  • Incorporating object thinking into XP and Agile practice


Mentioned in questions and answers.

What concepts in Computer Science do you think have made you a better programmer?

My degree was in Mechanical Engineering so having ended up as a programmer, I'm a bit lacking in the basics. There are a few standard CS concepts which I've learnt recently that have given me a much deeper understanding of what I'm doing, specifically:

Language Features

  • Pointers & Recursion (Thanks Joel!)

Data Structures

  • Linked Lists
  • Hashtables

Algorithms

  • Bubble Sorts (a quick sketch of this, plus a linked list, follows below)
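Since a couple of the items above click fastest in code, here is a minimal, self-contained Java sketch of a singly linked list and bubble sort (the class and method names are my own, purely for illustration):

public class Basics {
    // One node of a singly linked list: a value plus a link to the next node.
    static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    // Prepend a value and return the new head of the list.
    static Node push(Node head, int value) {
        Node node = new Node(value);
        node.next = head;
        return node;
    }

    // Bubble sort: keep swapping adjacent out-of-order elements until a
    // full pass makes no swaps. Simple, but O(n^2); fine for learning,
    // rarely for production.
    static void bubbleSort(int[] a) {
        boolean swapped = true;
        while (swapped) {
            swapped = false;
            for (int i = 1; i < a.length; i++) {
                if (a[i - 1] > a[i]) {
                    int tmp = a[i - 1];
                    a[i - 1] = a[i];
                    a[i] = tmp;
                    swapped = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        Node list = push(push(push(null, 3), 1), 2); // list is 2 -> 1 -> 3
        for (Node n = list; n != null; n = n.next) System.out.print(n.value + " ");
        System.out.println();

        int[] data = {5, 2, 4, 1, 3};
        bubbleSort(data);
        System.out.println(java.util.Arrays.toString(data)); // [1, 2, 3, 4, 5]
    }
}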

Obviously, the list is a little short at the moment so I was hoping for suggestions as to:

  1. What concepts I should understand,
  2. Any good resources for properly understanding them (as Wikipedia can be a bit dense and academic sometimes).

I find it a little funny that you're looking for computer science subjects but find Wikipedia too academic :D

Anyway, here goes, in no particular order:

As a recent graduate from a computer science degree I'd recommend the following:

Some OS concepts

(memory, I/O, scheduling, processes/threads, multithreading)

[a good book: Modern Operating Systems, 2nd Edition, by Andrew S. Tanenbaum]

Basic knowledge of computer networks

[a good book: Computer Networks, also by Tanenbaum]

OOP concepts

Finite automata

A programming language (I learnt C first, then C++)

Algorithms (time/space complexity, sorting, searching, trees, linked lists, stacks, queues)

[a good book: Introduction to Algorithms]

I have struggled for a long time to find a compelling use case for workflow (i.e., WF) as opposed to regular imperative programming. Each time, I fall back to the conclusion that I should just leave WF out or defer getting into it until later. But I keep having this nagging feeling that there's something I'm missing.

Does anyone know of a book that truly makes a strong case for the workflow way? The book has to (i) teach WF well, and (ii) show, with appropriate use cases, that WF made an implementation easier than if we had just done our regular straight coding.

I will appreciate it.

Not sure if there is a good answer to your question. The problem isn't that the question isn't valid or anything like that; it's that you are asking for two very different things.

First of all, you ask for compelling reasons to use workflow. This is a very subjective question and not technology-related at all. You can find white papers on the web pointing to all sorts of successful, and unsuccessful for that matter, workflow implementations. This is regardless of technology; a solution that was done using some product X could just as well have been done using product Y. The chapter from Shukla and Schmidt certainly explains the fundamentals, but I am not sure it is a good book for showing you where and how to apply workflow.

Secondly, you are looking for a book to teach you Windows Workflow Foundation. The first question is WF3 or WF4, as they are very different beasts. I will assume WF4, because that will replace WF3 when .NET 4 is released (real soon now) and starting with WF3 doesn't make a lot of sense in most cases. But as WF3 was never very popular and the book market is not very profitable for most writers, there are no WF4 books out yet. I believe Bruce Bukovics is working on a new version of his Pro WF: Windows Workflow in .NET 3.5 book, which I found one of the more usable WF3 books. So far there is nothing, though, and you are stuck with the, extremely limited, docs on the MSDN site and blogs like mine here. And of course there are courses out there, like this one from DevelopMentor (note: shameless plug, as I am the lead course author).

I did provide a number of reasons in this answer here; these might help you some.

Not quite an answer to your question, but I hope all this will still be useful to you.

I don't know of a book specifically about this topic. However, I believe that part of the appeal of WF (or other workflow products) is that it reintroduces the possibility of the loosely-coupled, message-based paradigm that the original OO guys (such as Alan Kay) were interested in.

The concept of a "message" being passed is not immediately obvious in WF. However, the notion of objects acting as discrete machines is.

For an awesome (but somewhat crazy) book about the state of OO, see Object Thinking by David West. And see here for Alan Kay's discussion of what OO means for him.

I’ve almost 6 years of experience in application development using .NET technologies. Over the years I have improved as an OO programmer, but when I see code written by other guys (especially the likes of Jeffrey Richter, Peter Golde, Ayende Rahien, Jeremy Miller, etc.), I feel there is a generation gap between my designs and theirs. I usually design my classes on the fly, with some help from tools like ReSharper for refactoring and code organization.

So, my question is “what does it take to be a better OO programmer?” Is it

a) Experience

b) Books (reference please)

c) Process (tdd or uml)

d) patterns

e) anything else?

And how should one validate that a design is good, easy to understand, and maintainable? With so many buzzwords in the industry like dependency injection, IoC, MVC, and MVP, where should one concentrate more in design? I feel abstraction is the key. What else?

Having your design reviewed by someone is quite important. Reviewing and maintaining legacy code helps you realize what makes software rot. Thinking is also very important: on the one hand, don't rush into implementing the first idea; on the other hand, don't try to think of everything at once. Do it iteratively.

Regular reading of books/articles, like Eric Evans's Domain-Driven Design, or learning new languages (Smalltalk, Self, Scala) that take a different approach to OO, helps you really understand.

Software, and OO, is all about abstractions, responsibilities, dependencies, and duplication (or the lack of it). Keep them in mind on your journey, and your learning will be steady.

It takes being a better programmer to be a better OO programmer.

OO has been evolving over the years, and it has a lot to do with changing paradigms and technologies like n-tier architecture, garbage collection, and Web Services: the kind of things you've already seen. There are fundamental principles, such as maintainability, reusability, low coupling, KISS, DRY, and Amdahl's law, that you have to learn, read about, experience, and apply yourself.

OO is not an end in itself, but rather a means to achieve programming solutions. As with games, sports, and the arts, practices cannot be understood without principles, and principles cannot be understood without practices.

To be more specific, here are some of the skills that may make one a better programmer. Listen to the domain experts. Know how to write tests. Know how to design GUI desktop software. Know how to persist data into a database. Separate the UI layer from the logic layer. Know how to write a class that acts like a built-in class. Know how to write a graphical component that acts like a built-in component. Know how to design client/server software. Know networking, security, concurrency, and reliability.

Design patterns, MVC, UML, refactoring, TDD, etc. address many of these issues, often extending OO in creative ways. For example, to decouple the UI layer from the logic layer, an interface may be introduced to wrap the UI class. From a pure object-oriented point of view it may not make much sense, but it makes sense from the point of view of separating the UI layer from the logic layer.
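A minimal sketch of that idea in Java (ReportView, ConsoleView, and ReportGenerator are invented names, not from any particular framework): the logic layer depends only on an interface, so any UI class that implements it can be plugged in.

// The logic layer owns this interface; UI classes implement it.
interface ReportView {
    void showReport(String text);
}

// One concrete UI implementation; it could be swapped for a GUI
// without touching the logic layer at all.
class ConsoleView implements ReportView {
    public void showReport(String text) {
        System.out.println(text);
    }
}

// Logic layer: depends on the ReportView abstraction, never on a UI class.
class ReportGenerator {
    private final ReportView view;

    ReportGenerator(ReportView view) {
        this.view = view;
    }

    void run() {
        String report = "quarterly totals..."; // business logic lives here
        view.showReport(report);
    }
}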

Finally, realizing the limitations of OO is important too. In modern application architecture, the purist data + logic view of OO doesn't always mesh very well. The data transfer object (Java, MS, Fowler), for example, intentionally strips away the logic part of the object so that it carries only the data. This way the object can turn itself into a binary data stream or XML/JSON. The logic part may then be handled at the client and/or server side in some way.
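As a rough sketch (the class and field names here are invented for illustration), a DTO is deliberately nothing but data:

// A data transfer object: no behavior, only data to be serialized
// (to JSON, XML, or a binary stream) and shipped between tiers.
public class CustomerDto {
    public long id;
    public String name;
    public String email;
    // No business rules here; whatever logic applies to customers
    // lives on the client and/or server side instead.
}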

Something that's worked for me is reading. I just had a light-bulb moment with this book... David West's Object Thinking, which elaborates on Alan Kay's comment that "the object revolution has yet to happen". OO is different things to different people; couple that with the fact that your tools influence how you go about solving a problem. So learn multiple languages.

Object Thinking by David West

Personally, I think understanding the philosophy, principles, and values behind a practice, rather than mimicking the practice, helps a lot.

I was reading this paper from Apple:

http://developer.apple.com/library/mac/documentation/cocoa/conceptual/OOP_ObjC/OOP_ObjC.pdf

where it talks about OOP in a way I had never heard before. I graduated in computer science around 1991, before OOP became popular, so to me the use of OOP was merely defining some classes and then calling the methods; that's it. Objects didn't interact with each other -- everything was done in a main function that called the various objects' methods.

That was true until I read the paper above, which talks about interfaces, dynamic typing, and dynamic binding: an object can send another object a message even before the second object is invented -- only the "interface", or the message, needs to be well defined. The second object can have a data type that is unknown as of right now, to be invented in the future, but all it needs to do is understand the "message".

This way, the objects interact with one another, and each object may have a list of "outlets", which are the relationships it has with the outside world. The object interacts with its outlets by sending them messages, and those objects, on receiving a message, can in turn send a message back to the sender. (Sending a message to an object = calling the object's method.)
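To make that concrete, here is a rough Java sketch of the idea (Outlet, Sender, and Logger are invented names): the sender is written against an interface alone, so the receiving object's concrete type can be invented later, as long as it understands the message.

// The "message" the sender knows how to send: just an interface.
interface Outlet {
    void receive(String message);
}

// The sender knows nothing about who is on the other end of its outlets.
class Sender {
    private final java.util.List<Outlet> outlets = new java.util.ArrayList<>();

    void connect(Outlet outlet) { outlets.add(outlet); }

    void broadcast(String message) {
        for (Outlet o : outlets) {
            o.receive(message); // "send a message" = call the method
        }
    }
}

// A receiver written later, long after Sender: all it has to do is
// understand the message.
class Logger implements Outlet {
    public void receive(String message) {
        System.out.println("got: " + message);
    }
}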

I think this sort of opened my eyes to OOP, much more than even the Design Patterns book by the Gang of Four. The Apple paper didn't cite any source, but I wonder if it might follow some methodology from a book. Does any OOP book give a good, solid foundation in the kind of OOP the Apple paper is talking about?

I am from a .NET background, and I am planning to read the following book to address this question.

Foundations of Object-Oriented Programming Using .NET 2.0 Patterns - Christian Gross

What I find interesting about this book is:

  1. Use of generics
  2. Explaining patterns as a solution to a problem

A nice (and quite short) introduction to OOP is "Coffee Maker".

I personally really enjoy reading "Object Thinking".

Another interesting book is "Domain-Driven Design: Tackling Complexity in the Heart of Software".

Next in my to-read list is "Object Design: Roles, Responsibilities, and Collaborations".

I understand procedural programming (well, who doesn't) and want to get a good understanding of OOP, and after that functional programming. I'm just a hobbyist, so it will take me an age and a day, but it's fun.

Does anyone have any ideas for what I can do to help? Project ideas? Example well-documented code that's out there?

I am currently using C++ but C# looks a lot nicer to work with.

I'd recommend working mainly with a strongly-typed language like C# or Java, as so many of the design patterns and general OOP principles are geared towards strong typing (GoF, Refactoring, Uncle Bob). Ruby is wonderful, but a lot of common OOP principles like coding to interfaces won't apply.

Spend some time with Uncle Bob's SOLID principles. Go slowly, paying particular attention to Single Responsibility. If you don't get anything else from Uncle Bob, get SRP in your head and apply it early and often.

I also like Uncle Bob's idea of code katas. I'd recommend working through the bowling game kata.

There are some great OOP books from Head First covering Object-Oriented Analysis and Design and Object-Oriented Design Patterns.

I recommend you read Object Thinking by David West. There is very little code in the book, but a lot of talk about how to model.

A couple of things I wish someone had told me when I was starting out:

  1. When modeling objects, you should focus on behaviors more than on the shape of the data (see the sketch after this list).
  2. Although many OO tutorials model real-world things such as animals and vehicles, many of the things we model in OO software are concepts and constructs (things that have no physical representation in the real world).
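As a tiny illustration of the first point (the account example and its method names are mine, not from any book): instead of exposing the shape of the data, expose what the object can do.

// Data-shaped: callers reach in, take the numbers, and do the work themselves.
class AccountData {
    private double balance;
    double getBalance() { return balance; }
    void setBalance(double b) { balance = b; }
}

// Behavior-focused: the object does the work; callers ask, they don't take.
class Account {
    private double balance;

    void deposit(double amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }

    boolean canCover(double amount) {
        return balance >= amount;
    }
}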

As you may have guessed from the question - I am right at the beginning of the Obj-C journey.

I'm hoping that someone out there knows of some diagrams that depict the relationship between classes, objects and methods - and that they're willing to share.

The problem I'm having is that just looking at code in a textbook doesn't completely explain it - for me at least.

Thanks for reading!

Regards, Spencer.

To some extent, diagrams may not be that helpful to answer the questions you present.

It may help to think of things like this:

A "class" provides the prototype or definition for some thing. For example, a "Person" or a "Car". A common synonym for "class" is "type".

An "object" is a concrete example or instance of a class. For example, you are an instance of "Person", and your car is an instance of "Car".

A "method" is a behavior, action or property of a class. However, a method is normally only meaningful in the context of an object. "Person" -> "Eat" is not meaningful, but "you" -> "Eat" is.

These are fundamental Object-Oriented concepts that are not specific to Objective-C. If you are interested in a general overview that is language-agnostic, I recommend "Object Thinking" by David West. Even though it's from Microsoft Press, it covers the concepts rather than any specific language.

I know C, C++, and COBOL.

Now I am trying to learn C#, and I want to do some hobby projects with it.

So can you suggest where I should start?

I searched on Google, but I want to start from a book that gives plenty of practice problems for a newcomer to .NET.

Can anybody suggest a great book online that I should really start from?

Everyone learns a little differently, so I've included a few different types of resources.

Books

It seems that one of the biggest challenges for a COBOL programmer moving to a language like C# is the object-oriented way everything is done. "Everything is an object" is a pretty good generalization within C#, and certainly good enough for a beginner. So, the first suggestion is the book Object Thinking. It attempts to introduce objects from a philosophical and historical perspective. It specifically discusses some of the differences between procedural languages and OO languages. Now, it is a bit academic (written by a professor), but there's a good fundamental basis here.

Once OO is understood, there are a number of C# books available. Many people recommend Richter's CLR via C#, which is a spectacularly good book. For a veteran of CS, you can't recommend a better, more thorough book on C# and the CLR. For a more "approachable", feature-oriented path, I've always found Troelsen to be excellent.

Websites

Another approach is to compare and contrast syntaxes. Someone fluent in COBOL will think in COBOL when first attempting to write C#. So here is an article on CodeProject that does a side-by-side comparison of VB.NET, C#, and COBOL. It's not a complete overview, but it could be a good reference for someone trying to figure out how to, let's say, write a loop in C#. There is also this blog post, which is more to the effect of taking C# and converting it to COBOL. Still, the comparison between the two may be helpful.

Training

For those who need instructor-led courses, Microsoft offers Getting Started with Microsoft .NET for COBOL Programmers. Exactly where this would be offered, however, may be a challenge.

Coding

Fujitsu makes a cool product called NetCOBOL for .NET. Nothing beats writing code. Here, you can write COBOL code within Visual Studio to produce Microsoft Intermediate Language (MSIL), which runs on the CLR (how cool). Using this, a COBOL programmer could write OO COBOL but leverage the .NET Framework. Perhaps, using this, you could go the next step and use Reflector to decompile the IL into C#, VB, etc. The website doesn't list a price, which means "if you have to ask, you can't afford it." Also, the goal here ISN'T to write more COBOL, so this may be a highly addictive crutch for making the transition to C#.

Videos

Fujitsu has also published a series of .NET-for-COBOL-programmers videos on YouTube. The intro video is located here and the first lesson is here, but anything by the user fujistucobol would be good.

Some container class holds a list of objects of the base class "Table" (like WoodenTable, MetalTable...). Each Table subclass keeps its MaterialType (MaterialType.Wood, MaterialType.Metal...). The question is how to provide a proper getter method on the container class that can return each subclass of Table.

So far I've found the following ways:

1. A getter with the material type as a parameter. The danger here is a ClassCastException if the type of T doesn't correspond to materialType:

<T extends Table> T getTable(MaterialType materialType)
WoodenTable table = getTable(MaterialType.Wood);
MetalTable table = getTable(MaterialType.Wood); // oops... ClassCastException

2. A getter with a Class parameter. Safe, but not as clear for the user (compared with MaterialType as the parameter):

<T extends Table> T getTable(Class<T> tableClass)
WoodenTable table = getTable(WoodenTable.class);

3. A getter for each Table subclass. Cumbersome to use, to write, and to extend with new Table subclasses:

WoodenTable getWoodenTable()
WoodenTable table = getWoodenTable();

4. A getter for just the Table interface. The cast is done outside the container class if necessary:

Table getTable(MaterialType materialType)
WoodenTable woodenTable = (WoodenTable) getTable(MaterialType.Wood);

Is there any other (better) way to do that? If not, which of these would be the most appropriate, or the least smelly?

I would recommend that you stop thinking about tables as data structures with attributes (material) and start treating them as "persons" (Object Thinking). Don't get a "material" out of them; instead, let them expose their behavior.

When you change the design of the tables, you will automatically change the design of their container, and it will become obvious that the container shouldn't care about the tables' materials but should let them control whether they want to get out of the container or remain there.
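A rough Java sketch of what that could look like (the holdsWeight behavior and the weight limits are invented purely for illustration): callers ask tables what they can do, never what they are made of.

// Tables expose behavior; there is no getMaterial() anywhere.
interface Table {
    boolean holdsWeight(double kilograms);
}

class WoodenTable implements Table {
    public boolean holdsWeight(double kilograms) { return kilograms <= 50; }
}

class MetalTable implements Table {
    public boolean holdsWeight(double kilograms) { return kilograms <= 200; }
}

// The container no longer cares about materials at all.
class Tables {
    private final java.util.List<Table> tables = new java.util.ArrayList<>();

    void add(Table t) { tables.add(t); }

    // Ask for a table that can do the job, whatever it is made of.
    java.util.Optional<Table> ableToHold(double kilograms) {
        return tables.stream().filter(t -> t.holdsWeight(kilograms)).findFirst();
    }
}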

I'm in the process of integrating a pseudo-search capability into my app. I have a search widget that gives out a list of search hints (these hints come from an FTS3 SQLite table). When a user clicks on a search hint, a corresponding SQLite table populates a ListView.

I need a way to identify what table will populate the list based on the selected search hint. I'm thinking of doing something like this:

switch (search_hint) {
    case search_hint_1:  useTable(table_1);
                         break;
    case search_hint_2:  useTable(table_2);
                         break;
    case search_hint_3:  useTable(table_1 + table_2); // case when I need to use
                         break;                       // two tables for the ListView
}

I'm sure this is a possible solution, but what if there are several hundred or several thousand cases? Can anyone suggest a better way to approach this?

A switch statement like this is an anti-pattern in object-oriented design (see Object Thinking). You should use inheritance instead.
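A minimal sketch of that idea in Java (SearchHint and the concrete hint classes are invented names, not part of the Android API): each hint knows its own table(s), so adding a new hint means adding a class rather than another case.

// Each search hint knows which table(s) should populate the list for it.
interface SearchHint {
    java.util.List<String> tables();
}

class AuthorHint implements SearchHint {
    public java.util.List<String> tables() {
        return java.util.Arrays.asList("authors");
    }
}

class TitleAndAuthorHint implements SearchHint {
    // A hint that needs two tables is not a special case at all.
    public java.util.List<String> tables() {
        return java.util.Arrays.asList("titles", "authors");
    }
}

// The caller never switches on anything:
// for (String table : selectedHint.tables()) { useTable(table); }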