
Art to Smart: An Automated Bharatanatyam Dance Choreography


Abstract

The ancient Indian classical dance form BharataNatyam (BN) can stay alive and dynamic by allowing innovative, experimental ideas. A performance comprises a sequence of possible and legitimate dance steps, and it is estimated that, using the main body parts, namely the head, neck, hands, and legs, more than five lakh (500,000) dance steps can be generated for a single beat. Dance choreography is therefore an intensive, creative, and intuitive process: a choreographer has to select appropriate dance steps from among millions of possibilities. Although this is not impossible, a human choreographer cannot explore, analyze, and remember all these variations among steps because of the sheer number of available options.

Hence, we propose to develop an autoenumeration followed by an autoclassification of significant BN dance steps that can be used in dance performance and choreography. The foremost and most challenging task is to build a computational model that represents different BN dance poses. The second task is to develop a genetic algorithm (GA)-driven automatic system that provides choreographers with a list of unexplored, novel dance steps to fit a single beat. We designed the Art to SMart system to model the dance art of BharataNatyam; this system generates dance poses. Furthermore, we have developed a stick figure generation module to help visualize the 30-attribute dance vector generated by the system. The results are evaluated using a mean opinion score measure.

INTRODUCTION

Most BharataNatyam (BN) practitioners learn by rote from a dance teacher. They follow the same choreography as taught in their dance schools, with few variations. Natyashastra (see the Section entitled “BharataNatyam Dance” for further information) clearly mentions that BN dance can be creative and innovative. This led us to try out the various unexplored possibilities of movement of every limb to achieve new choreography.

A simple Java program developed to automatically generate the movement combinations possible for a single beat shows that the choreography combinations number in the lakhs (hundreds of thousands) for even a single beat; for further detail we refer the reader to Jadhav and Sasikumar (Citation2010). The problem is thus a very large search space: however expert a choreographer may be in the subject of dance, no human being can remember and analyze all of these movements simultaneously. Because the number of generated sequences grows exponentially, it becomes extremely difficult to enumerate and classify them manually.
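The combinatorial explosion such a program reveals can be sketched in a few lines. The per-limb option counts below are purely illustrative assumptions, not the counts used in the study:

```python
from math import prod

# Hypothetical per-limb option counts (illustrative assumptions only; the
# real counts come from the codified BN movement vocabulary).
options = {"head": 5, "right_hand": 40, "left_hand": 40,
           "waist": 6, "right_leg": 12, "left_leg": 12}

# The single-beat choreography space is the product of the per-limb counts,
# which is why it grows into the lakhs even for modest vocabularies.
total = prod(options.values())
```

Even these small assumed vocabularies yield 6,912,000 combinations for one beat, far beyond what any choreographer can enumerate by hand.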

Furthermore, it must be noted that we are not merely working on an enumeration problem. We are striving to obtain dance mudra (positions) that take into account dance aesthetics to conform with the classical BN dance pertaining to Adavus (explained in “BharataNatyam Dance”). Hence, we propose a genetic algorithm (GA)-based automated system to identify unexplored dance combinations.

How to measure the “goodness” of a dance pose was itself a challenging task.

We propose a fitness measure that assists in selecting good dance poses and passing them on to subsequent generations, ultimately yielding results that are approved by well-known BN dance performers.

Our objective is to provide choreographers with a list of dance steps that are possible but as yet unexplored, while trying not to deviate much from the original traditional style. As mentioned earlier, we use GA to determine the appropriateness of the dance steps to be filtered out from among lakhs of possibilities. Appropriateness is modeled by a fitness function, and we propose a fitness function that measures the distance of generated dance steps from the ideal dance steps of BN.

The remaining sections of this article are organized as follows. “BharataNatyam Dance” gives information about BN dance in detail. Various attempts at dance automation are presented in “Literature Review.” “Art to Smart” explains the system by elaborating on the computational model for BN dance representation, followed by “BN Dance Automation.” Next, “Experimental Results” are presented. Details of an automatic module we have developed to visualize the resulting dance vectors are presented in “Stick Figure Generation” and generated results are evaluated in the section that follows. Conclusions are put forth in the final section.

BHARATANATYAM DANCE

BharataNatyam is one of the most ancient of all the Indian classical dance styles. Although there is no formal notation available for the dance form, it is a highly formalized dance as described in ancient texts. This dance is known for its grace, purity, tenderness, and sculpturesque poses. BharataNatyam is based on the precepts of, among others, Bharata’s Natyashastra and Nandikeshwara’s Abhinayadarpana, to mention the most well-known treatises of the dance.

Any dance form can be conceptually decomposed into constituent base units involving specific positions of body parts, and the norms for combining these units are governed by physical, aesthetic, preferential, and other constraints. Treatises such as Bharatmuni’s Natyashastra and Nandikeshwara’s Abhinayadarpana codify these constraints and rules, drawing on human experience and the existing literature. BharataNatyam (BN) is an ancient Indian classical dance form, well documented in the scriptures, with classifications of elementary body movements such as head, eye, neck, hand, and leg movements. BN also divides dance into Nritta, rhythmic/pure dance movements used for aesthetic purposes, and Nritya, representational dance that conveys a meaning. Our research assists only the choreography of Nritta, and this article focuses on movements for a single beat/count.

The rationale for choosing only Nritta is as follows: for any given song, it is easy to choreograph the lyrics because they carry meaning, but the portion between two stanzas mostly consists of music alone. A choreographer always finds it difficult to choreograph this part. She or he can either choreograph this piece so that it conveys some meaning (e.g., showing the dancer waiting for her beloved or searching for him) or use beautiful, intricate dance steps that have no meaning but add elegance to the dance. Our work aids the choreographer in choosing the best dance steps for Nritta.

In BN dance, a set of ideal dance steps are already well documented and practiced. The following subsection describes these concepts.

Adavu: Ideal BN Dance Steps

“Adavus” in BN are the fundamental units used in Nritta, wherein the hands, feet, head, eyes, and other parts of the body move in a coordinated manner. To fulfill the conditions of “Anga shuddhi,” or purity of movement, the four lakshana (criteria) of an Adavu must be correctly performed. These are

  • Sthanaka: posture assumed at the beginning and end of the Adavus.

  • Nritta hasta: the hand movements used in the performance.

  • Chari: movement of the hands and feet.

  • Hastakshetra: the position of the hands throughout the performance.

Various classifications of adavus exist, with names such as Tattuadavu, Nattadavu, and so on. Each of these adavus is performed with many variations. These are considered excellent moves (Majumdar and Dinesan Citation2012) for any given number of beats and can serve as the ideal reference for comparing movements. We represented these dance steps using our dance-step representation vector model in Jadhav, Joshi, and Pawar (Citation2012b). We have modeled all combinations of the popularly practiced adavus according to our representation model and used them in the GA module to obtain dance mudras that do not deviate too much from the ideal ones.

LITERATURE REVIEW

Automating dance choreography and capturing movements have been attempted before. Dance can be represented using existing notation: Ebenreuter (Citation2006) designed an interface to facilitate the exact documentation of dance notation, and Karpen (Citation1990) tried to solve the problem of notation for BharataNatyam using Labanotation. The LabanDancer system developed by Calvert et al. (Citation2005) translated recorded Labanotation scores into 3D human figure animations. Nahrstedt et al. (Citation2007) enhanced the choreographic process using a 3D tele-immersive (3DTI) room surrounded by multiple 3D digital cameras, with a dancer placed in a remote 3DTI room in a joint virtual space. Laban movement analysis (LMA) provides a model for the observation, description, and notation of human movements; it has been implemented computationally through a Bayesian approach (Rett, Dias, and Ahuactzin Citation2008) and through 3DTI (Nahrstedt et al. Citation2007). The choreographic language agent (CLA; DeLahunta Citation2009) helped bridge the gap between the notations, sketches, diagrams, and text a choreographer produces in a notebook and his or her thinking process, thus uniquely augmenting the choreographer’s thinking. Based on Newton’s laws, Hsieh and Luciani (Citation2005) generated a dynamic model for dance verbs such as jump and flip.

Correctly capturing and modeling all the dance movements also enables efficient processing. Researchers have processed such captured and modeled data in various ways: evolutionary approaches using genetic algorithms (Nakazawa and Paezold-Ruehl Citation2009; Lapointe Citation2005; Hagendoorn Citation2002) and multiagent systems (Bechon and Slotine Citation2012; Dubbin and Stanley Citation2010), optimization algorithms, classification using neural networks (Qian et al. Citation2004) and support vector machines (Hagendoorn Citation2002), image processing for gesture recognition (Hariharan, Acharya, and Mitra Citation2011; Bradley Citation1998), and corpus-based systems (Barry et al. Citation2005).

Our system differs from the existing work because it is focused on an Indian classical dance, BharataNatyam. The system generates novel steps through evolutionary programming and serves as a tool for choreographers to enhance their skills.

ART TO SMART SYSTEM

The Art to SMart system has two major components: data modeling for BN dance and BN dance choreography automation, as shown in Figure 1. The latter module uses GA principles. Both modules are elaborated in the following subsections.

FIGURE 1 Art to SMart system structure.

Adavu image: ©onlinebharatanatyam.com. Reproduced by permission of onlinebharatanatyam.com. Permission to reuse must be obtained from the rightsholder.

Data Modeling for BN Dance

In this subsection we describe our proposed model for representing any BN dance step. We modeled the ancient Indian classical dance BN and named the result the “Art to SMart” system, with “SMart” standing for “System Modeled art.” Any BN dance position is a manifestation of one of the possible combinations of legitimate poses of the various body parts. Hence, in order to model different BN dance positions, we must represent the exact positions of the six major limbs of the body (head, two hands, waist, and two legs) at each step. We identified a separate set of attributes to model each body part’s positions.

Hence, a dance position vector (DP vector) can be defined as the combination of all these attributes, which together represent a single BN dance position at the end of every beat. We have eight attributes to model each hand position and five for each leg position. In addition, we have two attributes each to model head and waist movement. Thus, the DP vector is a 30-attribute vector that corresponds to a dance position. Eye movements are not considered because they are very fine movements used to express feelings such as anger, happiness, and so on. Neck movements are also not considered because these finer movements can be modeled with the head itself. Hence, we restricted ourselves to modeling all possible combinations of poses constituted from the head, waist, hands (single-hand as well as double-hand mudras, per BN), and legs. Modeling hand and leg movements is complex because they are combinations of the positions of their subparts. For example, a hand position is constituted by the positions of the hand, elbow, wrist, and palm, and by the finger positions that represent one of the hand mudras. Similarly, for a leg, the position and movement of the waist, knee, and ankle must be modeled. Body part positions can be visualized and modeled using the orientation of a body part or its subpart from specified x, y, and z axes (Nakazawa et al. Citation2002).

The attributes related to the x, y, and z axes merit some elaboration. For hand movements, the x axis represents right-to-left movement of the hand. The negative value on the x axis is limited to −60 degrees because the hand cannot be moved more than 60 degrees horizontally in the opposite direction. For the right hand, all movements to the right side are assigned positive values and movements to the left, negative values; a symmetric convention is used for the left hand. The y axis for the hands represents upward and downward movement: upward movements are positive and downward movements negative. The z axis represents front and back movement of the hands. The elbow attribute ranges from a straight arm to a completely folded arm, with values assigned accordingly. The wrist attribute takes the values 1, −1, and 0 for the palm facing down, the palm facing up, and the normal position, respectively. The palm bending toward the fingers is 1, bending the opposite way is −1, and no bending is 0. For the shoulder jerk, pulling in is 1, pushing out is −1, and the normal position is 0.

For leg positions, too, we separately consider the movement of the thigh from the waist, the knee, and the ankle. For a leg, the x axis corresponds to horizontal leg movement: positive values for right-side movement and negative values for left-side movement. Front and back movements are tracked using the z axis, and the y axis records whether the leg is touching or off the ground. A knee can be bent completely in the full sitting position (Mandi), half bent in the half-sitting position (Ardhmandala), or straight with no bend (Sthanaka). All these angles are specified along the x, y, and z axes of the hands and legs and strictly adhere to BN norms; the same holds for the knee positions just mentioned, whose values determine the corresponding attribute.

For the head attributes, values are assigned on the basis of the direction of the head from downward to upward, whereas for the waist we model twist and bend separately, using two attributes. A right or left twist from the waist is positive or negative, respectively. Bending fully and halfway down from the waist is modeled, as is a slight backward bend, coded as −30 degrees. For further details of the entire codification process, please refer to Jadhav, Joshi, and Pawar (Citation2012b).
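As a sketch of this codification, the discrete attribute values quoted above can be tabulated directly. The symbolic keys below are hypothetical names of ours, introduced only for readability; the numeric codes are the ones stated in the text (the knee coding is an assumed ordinal scale):

```python
# Discrete codes from the text; the symbolic keys are hypothetical names.
WRIST = {"palm_down": 1, "palm_up": -1, "normal": 0}
PALM_BEND = {"toward_fingers": 1, "away_from_fingers": -1, "none": 0}
SHOULDER_JERK = {"pull_in": 1, "push_out": -1, "normal": 0}
# Assumed ordinal coding for the three knee positions named above.
KNEE = {"Sthanaka": 0, "Ardhmandala": 1, "Mandi": 2}

def encode_wrist_palm_shoulder(wrist, palm, shoulder):
    """Pack three of the eight hand attributes from symbolic names."""
    return [WRIST[wrist], PALM_BEND[palm], SHOULDER_JERK[shoulder]]
```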

We represent the dance model as follows.

D = < Lhd, Lrh, Llh, Lw, Lrl, Lll >, where Li represents the set of attributes that corresponds to the exact position of limb i in that dance step. There are two attributes each for the head and waist, five each for the left and right legs, and eight each for the left and right hands.

Any dance step attribute can be referred to as Di.ak or Li.ak, where i refers to a dance step or one of the limbs, and k indexes one of the 30 dance-vector attributes or an attribute of the given limb. So D1.a5 refers to an attribute of the right hand; the same attribute can also be referred to as Lrh.a3.
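A minimal sketch of the DP vector layout implied by this notation (the limb order and attribute counts are as given in the text; everything else is illustrative):

```python
# Limb order and attribute counts from the text: 2 + 8 + 8 + 2 + 5 + 5 = 30.
LIMB_ATTRS = [("head", 2), ("right_hand", 8), ("left_hand", 8),
              ("waist", 2), ("right_leg", 5), ("left_leg", 5)]

def limb_slices():
    """Map each limb to its slice of the flat 30-attribute DP vector."""
    slices, start = {}, 0
    for name, n in LIMB_ATTRS:
        slices[name] = slice(start, start + n)
        start += n
    return slices

slices = limb_slices()
dp = [0] * 30                 # a neutral pose; all attributes zero

# D1.a5 in the text's notation is the vector's fifth attribute (1-based),
# which falls in the right-hand block, i.e. Lrh.a3.
idx = slices["right_hand"].start + 2   # third right-hand attribute, 0-based
```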

The dance step of Figure 2 is represented using the dance vector shown in Table 1.

TABLE 1 Dance Vector Description

FIGURE 2 A sample BharataNatyam dance step.

©onlinebharatanatyam.com. Reproduced by permission of onlinebharatanatyam.com. Permission to reuse must be obtained from the rightsholder.

BN DANCE AUTOMATION

We used GA to determine appropriate BN dance steps. A short description of GA, followed by its actual implementation, is given in this section. A genetic algorithm is a search process that follows the principles of evolution through natural selection. The domain knowledge is represented using candidate solutions called organisms. Typically, an organism is a single genome represented as a vector of length n whose elements gi are called genes.

A group of organisms is called a population, and successive populations are called generations. A generational GA starts from an initial generation G(0), and from each generation G(t) a new generation G(t + 1) is produced using genetic operators such as mutation and crossover. Evaluating a generation G(t) combines two steps; Figure 3 describes the evolutionary procedure. The first step filters out nonstandard dance combinations generated through the crossover operation. Second, the fitness of each genome is determined, using an intuitive distance measure. Determining a fitness function is the most critical and important task in developing any GA-based system. The objective of the proposed system’s fitness function is to carry appropriate dance steps forward from the enumerated dance steps of one GA generation to the next. We wish to generate dance steps that are neither too far from nor too close (identical) to the adavus (Jadhav, Joshi, and Pawar Citation2012a). In short, the proposed fitness function is based on distance from the adavus. Hence, we propose a fitness function involving two parameters, limb variation count (LVC) and absolute vector distance (AVD), to determine the distance from the ideal dance steps. More details of these parameters are as follows.

  • LVC: This gives the count of body parts that differ between two dance vectors (the six major limbs of the body are represented by a dance vector). For example, if only a hand position changes between two chromosomes and the rest of the limb positions are exactly the same, then the limb variation count is 1, and so on. The higher the LVC, the further apart the two dance vectors are.

    FIGURE 3 Genetic algorithm used for generating choreography.


  • AVD: This parameter gives the absolute distance between two dance vectors: the cumulative sum of the absolute differences between the corresponding values of their 30 attributes. The distance d is a function of AVD and LVC.

    We give a higher weight (0.75) to AVD than to LVC (0.25) because the greater the vector difference, the greater the variation and novelty of the new dance step, and the smaller the limb variation, the smaller the deviation from the ideal dance step. The expected behavior of our proposed fitness function is a bell-shaped (normal distribution) curve, shown in Figure 4.

    FIGURE 4 Graph of fitness function.


  • The fitness function value (ffv) is given by ffv = f(d) = ND(d), where ND gives the normal distribution of the distance. The normal distribution ensures that higher fitness values are assigned to dance vectors that are neither too close to nor too far from the ideal vectors. More details can be found in Jadhav, Joshi, and Pawar (Citation2012a).
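Under the definitions above, the fitness computation can be sketched as follows. The normal-distribution parameters mu and sigma are illustrative assumptions, since their values are not stated here:

```python
import math

# Attribute counts per limb in vector order (head, right hand, left hand,
# waist, right leg, left leg), summing to 30.
LIMB_SIZES = [2, 8, 8, 2, 5, 5]

def lvc(u, v):
    """Limb variation count: how many of the six limbs differ at all."""
    count, start = 0, 0
    for n in LIMB_SIZES:
        if u[start:start + n] != v[start:start + n]:
            count += 1
        start += n
    return count

def avd(u, v):
    """Absolute vector distance: sum of |difference| over all 30 attributes."""
    return sum(abs(a - b) for a, b in zip(u, v))

def fitness(candidate, ideal, mu=20.0, sigma=8.0):
    """ffv = ND(d), with d weighting AVD at 0.75 and LVC at 0.25 as in the
    text. mu and sigma are assumed values: the curve peaks at distance mu,
    penalising steps too close to or too far from the adavu."""
    d = 0.75 * avd(candidate, ideal) + 0.25 * lvc(candidate, ideal)
    return math.exp(-((d - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
```

With these assumed parameters, an identical copy of an adavu (d = 0) scores lower than a moderately varied step, matching the bell-shaped behavior described above.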

EXPERIMENTAL RESULTS

Generating appropriate dance steps was a challenging task. We faced the following two main challenges:

  • To avoid impracticable (not doable) as well as impractical (not practiced) dance steps, and

  • To generate steps that had surprise value or novelty.

We overcame the first challenge by filtering out impractical and impracticable steps using a database of infeasible dance steps. We overcame the second by developing an explicit fitness function based on the distance of the generated steps from the adavus.
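Putting the two remedies together, the generational loop can be sketched like this. All operators (crossover, mutation, the feasibility filter, and the fitness function) are caller-supplied placeholders, not the study's actual implementations:

```python
import random

def evolve(population, fitness, crossover, mutate, is_feasible,
           generations=50, seed=0):
    """Generational GA: from each G(t), breed offspring via crossover and
    mutation, discard infeasible dance vectors (the database filter), and
    keep the fittest genomes to form G(t + 1)."""
    rng = random.Random(seed)
    for _ in range(generations):
        offspring = []
        while len(offspring) < len(population):
            p1, p2 = rng.sample(population, 2)
            child = mutate(crossover(p1, p2), rng)
            if is_feasible(child):          # step 1: filter infeasible steps
                offspring.append(child)
        # step 2: fitness-based survivor selection
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:len(population)]
    return population
```

Because survivors are selected from parents and offspring combined, the best fitness in the population never decreases across generations.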

We asked an expert BharataNatyam exponent to pose according to the results obtained from our GA-driven system for the 30-attribute dance vector, as shown in Figure 5. It is painstaking to remember all the numeric codes assigned to each limb and the corresponding positions along the x, y, and z axes. To overcome this visualization problem, we automated the generation of stick figures from the 30-attribute vector produced by our Art to SMart system. The details of stick figure generation are presented in the next section.

FIGURE 5 Representation of one of the GA-generated dance mudra.


STICK FIGURE GENERATION

Stick figures are used to model the network of body segments and joints that articulate the body. A stick figure gives an abstract idea of the limbs and joints of the body for any given pose. Stick figures are used to represent dance alongside notations and help in visualizing dance steps. Pattanaik (Citation1989) illustrated how stick figures can be used to obtain an animated model of BN dance; stick figures for displaying human movements were also discussed by Calvert and Chapman (Citation2006) and by Savage and Officer (Citation1978). Even traditional BN dance gurus insist on stick figure diagrams for recording dance steps. One such recorded dance step for an “Adavu” is shown in Figure 6.

FIGURE 6 Hand-drawn stick figure: Traditional approach for memorizing dance.


The program is built with the open source software Octave, version 3.0. It takes the 30-attribute vector directly as input and displays the resulting 2D stick figure for the corresponding dance position vector. The system generates results with up to 80% accuracy; see Figures 7 and 8. The reason for not achieving complete accuracy lies in the limitations of the stick figure model, which does not show depth and therefore causes confusion in perceiving the movements. It works well for 2D poses viewed from the front, but twists, bends, overlapping limbs, and so forth are not shown accurately. Samples of the stick figures generated by the system are shown below alongside the actual pictures.

FIGURE 7 An adavu pose and corresponding stick figure.

Adavu image: ©onlinebharatanatyam.com. Reproduced by permission of onlinebharatanatyam.com. Permission to reuse must be obtained from the rightsholder.

FIGURE 8 A system-generated pose and corresponding stick figure.


Implementation Details

The code is written in the Octave language, an interpreted language with extensive data visualization capabilities; the plots are rendered through gnuplot, on which Octave’s plotting functions build. A separate module is written for each of the limbs, the head, and the torso, and these are triggered by a main script. Given the 30-attribute dance vector, an offset is calculated relative to a standard figure; each module is then called to plot its corresponding part, and thus the entire stick figure is generated.
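The per-limb modules are in Octave, but their core geometry can be sketched in Python: given angle attributes, compute the 2D projected joint positions relative to a standard figure. The segment lengths, anchor point, and degree convention below are illustrative assumptions, not values from the study:

```python
import math

# Illustrative analogue of one limb module (the real code is Octave).
SHOULDER = (1.0, 5.0)        # assumed anchor on the "standard figure"
UPPER, FORE = 1.5, 1.3       # assumed upper-arm and forearm lengths

def arm_points(shoulder_angle_deg, elbow_bend_deg):
    """Project shoulder, elbow, and wrist into the 2D picture plane.
    Depth (the z axis) is discarded, which is exactly the projection
    ambiguity the text goes on to discuss."""
    sx, sy = SHOULDER
    a = math.radians(shoulder_angle_deg)
    ex, ey = sx + UPPER * math.cos(a), sy + UPPER * math.sin(a)
    b = math.radians(shoulder_angle_deg + elbow_bend_deg)
    wx, wy = ex + FORE * math.cos(b), ey + FORE * math.sin(b)
    return (sx, sy), (ex, ey), (wx, wy)
```

For example, a straight horizontal arm (both angles zero) places the elbow one upper-arm length to the right of the shoulder and the wrist one forearm length beyond it.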

Issues Faced

Projecting 3D poses into 2D was an interesting task. In the process, we realized that 3D poses bring some inherent difficulties: because one dimension is lost in projection, we had to take special measures to avoid losing the information being conveyed. The first issue was distinguishing the two hands.

Because the common origin of the two arms projects to a single point in the stick figure, and either hand can move in both directions, the left and right hands could not be differentiated without referring back to the 30-attribute vector. We resolved this by bifurcating the joint of the two hands and giving each arm a different point of origin, which separated the two limbs without affecting the aesthetics of the figure. The second, and most important, issue was projecting positions that lie along the line of sight. We could project the x and y axes correctly, but the z axis posed a problem: for example, a hand held in front (along the positive z axis) and a hand held behind (along the negative z axis) project to the same 2D figure. We tried to overcome this by adding other cues to the figure, such as changing colors and appending vector values.

However, these cues did not sit well with the aesthetic visualization of the figures, so we resorted to displaying the figures unchanged, with a comment on the picture itself, leaving the interpretation to the reader.

EVALUATING RESULTS

We obtained various results from the GA-driven system for a single beat. These results were in the form of vectors; to analyze and interpret them, we generated the corresponding stick figure for each vector. These vectors were then explained, with the help of the stick figures, to our dance expert, who posed for them, and photographs were taken accordingly. The results were shown to various dance experts, who were asked to complete a tabular questionnaire with ratings ranging from 1 (worst) to 5 (excellent). This evaluation was used to compute a mean opinion score (MOS).

MOS is a technique for obtaining subjective feedback about image quality from experts. We consulted several domain experts to rate the results generated by our system. The images are rated on a 5-point scale as follows: 1: Not acceptable, 2: Bad, 3: Ok, 4: Good, 5: Excellent.
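Computing the MOS itself is straightforward: it is the mean of the experts' scores for each image. The ratings below are made-up illustrations, not the study's data:

```python
from statistics import mean

# Illustrative expert ratings (1-5) for two generated poses; the pose
# names and scores are hypothetical, not the paper's data.
ratings = {
    "pose_01": [3, 4, 3, 3, 4],
    "pose_02": [2, 2, 3, 2, 2],
}

labels = {1: "Not acceptable", 2: "Bad", 3: "Ok", 4: "Good", 5: "Excellent"}

# MOS is the mean of the experts' scores; mapping each MOS to the nearest
# label on the 5-point scale gives the per-image verdict.
mos = {img: mean(scores) for img, scores in ratings.items()}
verdicts = {img: labels[round(m)] for img, m in mos.items()}
```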

Table 2 shows a sample of the experts’ opinions. Five columns, titled Expert-1 to Expert-5, correspond to the scores of the individual experts. Of the 25 images given to five different BharataNatyam experts from Pune and Goa, 13 were rated “Ok,” 5 “Good,” and 7 “Bad.” None of the 25 images was rated Not Acceptable by any expert.

TABLE 2 A Sample of MOS Data

CONCLUSION

This research is a novel attempt at automating the Indian classical dance BN, which we have successfully implemented for a single beat. We have been able to model the major limbs of the body to represent the dancer’s final position at the end of a beat. The resulting dance position vector consists of 30 attributes covering the six major limbs of the body: head, right hand, left hand, waist, left leg, and right leg.

The GA-generated output can be used as a suggestion or a tool by choreographers, who can use it to their advantage rather than taking the system’s output as an exact replica. As validated by BN experts, the system is able to suggest unique choreography every time, and the majority of the results are acceptable to dancers of different age groups. The hand, leg, and even the unique head positions suggested by the system for a single beat allow more room for unique choreography and creativity.

ACKNOWLEDGMENTS

Our sincere thanks to Dr. M. SasiKumar, Director (R&D), CDAC, Pune, for his valuable guidance. Thanks to Mr. Shubhen Pal, Project Fellow (April 2011–April 2012) and Miss Namrata Dangui, Project Fellow (June 2012–May 2013). Sincere thanks to Ms. Anwaya Aras, Final Year Computer Science student, BITS Pilani, K. K. Birla Campus, Goa, for developing the stick figure generation module. We acknowledge the efforts of our model Ms. Sapna Naik, BN lecturer, Kala Academy, Panaji, Goa, who posed for the photographed dance steps, and of Ms. Anjali Nandan, USA, of onlinebharatanatyam.com, for the adavu images.

FUNDING

This work has been supported by UGC under Major Research Project No. 901/2010(SR).


REFERENCES

  • Barry, M., J. Gutknecht, I. Kulka, P. Lukowicz, and T. Stricker. 2005. From motion to emotion: A wearable system for the multimedia enrichment of a Butoh dance performance. Journal of Mobile Multimedia 1(2):112–132.
  • Bechon, P., and J.-J. Slotine. 2012. Synchronization and quorum sensing in a swarm of humanoid robots. arXiv:1205.2952.
  • Bradley, J. M. S. E. 1998. Learning the grammar of dance. In Proceedings of the international conference on machine learning (ICML), 547–555. CA, USA: Morgan Kaufmann.
  • Calvert, T., and J. Chapman. 2006. Aspects of the kinematic simulation of human movement. IEEE Computer Graphics and Applications 2(9):41–50.
  • Calvert, T., L. Wilke, R. Ryman, and I. Fox. 2005. Applications of computers to dance. IEEE Computer Graphics and Applications 25(2):6–12.
  • DeLahunta, S. 2009. The future of choreographic practice: The choreographic language agent. Paper presented at the World Dance Alliance Conference, Brisbane, Australia, July 14–18.
  • Dubbin, G. A., and K. O. Stanley. 2010. Learning to dance through interactive evolution. In Proceedings of the 2010 international conference on applications of evolutionary computation - volume part II, 331–340. Berlin, Heidelberg: Springer-Verlag.
  • Ebenreuter, N. 2006. Transference of dance knowledge through interface design. In CHI ’06 extended abstracts on human factors in computing systems, 1739–1742. New York, NY: ACM.
  • Hagendoorn, I. 2002. Emergent patterns in dance improvisation and choreography. In Proceedings of the international conference on complex systems. InterJournal.
  • Hariharan, D., T. Acharya, and S. Mitra. 2011. Recognizing hand gestures of a dancer. In Proceedings of the 4th international conference on pattern recognition and machine intelligence, 186–192. Berlin, Heidelberg: Springer-Verlag (accessed from http://dl.acm.org/citation.cfm?id=2026851.2026887).
  • Hsieh, C.-M., and A. Luciani. 2005. Generating dance verbs and assisting computer choreography. In Proceedings of the 13th annual ACM international conference on multimedia, 774–782. New York, NY: ACM.
  • Jadhav, S., M. Joshi, and J. Pawar. 2012a. Art to smart: An evolutionary computational model for bharatanatyam choreography. In Proceedings of the 12th international conference on hybrid information systems, 384–389. Pune, Maharashtra, India: IEEE Xplore.
  • Jadhav, S., M. Joshi, and J. Pawar. 2012b. Modeling bharatanatyam dance steps: Art to SMart. In CUBE 2012, 320–325. Pune, Maharashtra, India: ACM DL.
  • Jadhav, S., and M. Sasikumar. 2010. A computational model for bharata natyam choreography. International Journal of Computer Science and Information Security 8(7):231–233.
  • Karpen, A. P. 1990. Labanotation for Indian dance, in particular bharata natyam. Paper presented at the 11th European Conference on Modern South Asian Studies, Amsterdam.
  • Lapointe, F.-J. 2005. Choreogenetics: The generation of choreographic variants through genetic mutations and selection. In Proceedings of the 2005 workshops on genetic and evolutionary computation, 366–369. Washington, D.C.: ACM.
  • Majumdar, R., and P. Dinesan. 2012. Framework for teaching bharatanatyam through digital medium. In 2012 IEEE fourth international conference on technology for education, 241–242. Hyderabad: IEEE Computer Society.
  • Nahrstedt, K., R. Bajcsy, L. Wymore, R. Sheppard, and K. Mezur. 2007. Computational model of human creativity in dance choreography. Urbana 51:61801.
  • Nakazawa, A., A. N. Shinchiro, S. Kudoh, and K. Ikeuchi. 2002. Digital archive of human dance motions. In Proceedings of the international conference on virtual systems and multimedia (VSMM2002), 180–188. Gyeongju: VSMM.
  • Nakazawa, M., and A. Paezold-Ruehl. 2009. Dancing, dance and choreography: an intelligent nondeterministic generator. In The fifth Richard Tapia celebration of diversity in computing conference: intellect, initiatives, insight, and innovations, 30–34. New York, NY: ACM.
  • Pattanaik, S. N. 1989. A stylised model for animating bharata natyam, an Indian classical dance form. Berlin, Heidelberg: Springer-Verlag.
  • Qian, G., F. Guo, T. Ingalls, L. Olson, J. James, and T. Rikakis. 2004. A gesture-driven multimodal interactive dance system. In Proceedings of IEEE international conference on multimedia and expo, 27–30. IEEE Computer Society.
  • Rett, J., J. Dias, and J. M. Ahuactzin. 2008. Bayesian reasoning for laban movement analysis used in human-machine interaction. International Journal on Reasoning Based Systems 1:64–74.
  • Savage, G. J., and J. M. Officer. 1978. Choreo: An interactive computer model for dance. International Journal for Man-Machine Studies 10:1–8.
