Sagun Shrestha, Python, C, C++ programmer
In the C programming language:
 
#include <stdio.h>

void myfunc(void)
{
    printf("Smile!");
}

int main(void)
{
    int i, j;

    for (i = 0; i < 3; i++)
    {
        for (j = i; j < 3; j++)
        {
            myfunc();
        }
        printf("\n");
    }
    return 0;
}
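For comparison, the same triangle as a minimal Python sketch, which also serves as a quick check of the output the C version produces:

```python
# Row i has 3 - i copies of "Smile!", mirroring the nested C loops.
rows = ["Smile!" * (3 - i) for i in range(3)]
print("\n".join(rows))
```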
I like the other solutions, but they all require an operating system, which is a bit of a pain to install for something as simple as this.

I suggest:
    section .text
    bits 16
    org 0x7c00                ; BIOS loads the boot sector at this address

boot:
    xor bp, bp
    mov ds, bp
    mov ss, bp
    mov bp, 0x7000
    mov sp, bp                ; better set up a stack for the function calls
    mov bx, smile             ; load the address of our string
    call bios_print_string
    call bios_print_string
    call bios_print_string    ; row one: three smiles
    mov bx, new_line
    call bios_print_string
    mov bx, smile
    call bios_print_string
    call bios_print_string    ; row two: two smiles
    mov bx, new_line
    call bios_print_string
    mov bx, smile
    call bios_print_string    ; row three: one smile
    mov bx, new_line
    call bios_print_string
    jmp $                     ; loop forever ... our work is done.

; Print the zero-terminated string at [bx] using the BIOS teletype service.
; bx is saved and restored, so repeated calls print the same string again.
bios_print_string:
    push ax
    push bx
.loop:
    mov al, [bx]
    cmp al, 0
    jz .end
    mov ah, 0x0e              ; int 0x10 / ah = 0x0e: teletype output
    int 0x10
    inc bx
    jmp .loop
.end:
    pop bx
    pop ax
    ret

smile:
    db 'Smile!', 0
new_line:
    db 13, 10, 0              ; CR+LF; NASM does not expand '\n' in single quotes

    times 510 - ($ - $$) db 0 ; pad to a full sector
    dw 0xaa55                 ; boot signature

Which, if you assemble it with NASM (`nasm -f bin smile.asm -o smile.bin`) and write it to the first sector of a disk (or just boot it in an emulator with `qemu-system-i386 -fda smile.bin`), should remove all that nasty OS overhead.

What . . . no COBOL!?!? Let’s fix that:

IDENTIFICATION DIVISION.
PROGRAM-ID. SMILES.

DATA DIVISION.
   WORKING-STORAGE SECTION.
   01 SMILE-CNT PIC 9(1) VALUE 0.
   01 SUB-CNT   PIC 9(1) VALUE 0.
   01 STOPCNT   PIC 9(1) VALUE 3.

PROCEDURE DIVISION.
    PERFORM Smile-Paragraph WITH TEST BEFORE UNTIL SMILE-CNT >= STOPCNT.
    STOP RUN.

Smile-Paragraph.
    PERFORM SubSmile-Paragraph.
    DISPLAY SPACE.
    ADD 1 TO SMILE-CNT.

SubSmile-Paragraph.
    MOVE STOPCNT TO SUB-CNT.
    SUBTRACT SMILE-CNT FROM SUB-CNT.
    PERFORM WriteSmile WITH TEST BEFORE UNTIL SUB-CNT <= 0.

WriteSmile.
    DISPLAY 'Smile!' WITH NO ADVANCING.
    SUBTRACT 1 FROM SUB-CNT.
Peter Lewerin, Started in 1979, tried many languages and environments

Another Tcl answer that follows the specification a little bit closer:

proc Smile! {} {
    puts -nonewline Smile!
}

proc run n {
    puts [join [foreach Smile! [lrepeat [if [set n] {set n} return] Smile!] Smile!]]
    tailcall run [incr n -1]
}

(Yeah, that’s a pretty roundabout way to do it.)

Anonymous
Easy! You can do it in one simple line of Python:

locals().update({'sys' : __import__('sys'), 'f' : lambda: "Smile!", 'hand_this_program_in_to_your_professor_I_dare_you' : lambda n: 0 if not n else sys.stdout.write(f()) or hand_this_program_in_to_your_professor_I_dare_you(n - 1), 'no_really_I_dare_you' : lambda n: 0 if not n else hand_this_program_in_to_your_professor_I_dare_you(n) or sys.stdout.write('\n') or no_really_I_dare_you(n - 1)}) or no_really_I_dare_you(3) or sys.stdout.flush()
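For anyone who actually does have to hand something in: unrolled into named functions, the one-liner above is equivalent to this sketch (the function names here are mine, chosen for readability):

```python
import sys

def f():
    return "Smile!"

def print_row(n):
    # The first lambda: write "Smile!" n times, recursively.
    if n:
        sys.stdout.write(f())
        print_row(n - 1)

def print_triangle(n):
    # The second lambda: n rows, each one smile shorter than the last.
    if n:
        print_row(n)
        sys.stdout.write("\n")
        print_triangle(n - 1)

print_triangle(3)
```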
Dmitriy Genzel, PhD in CS

All the solutions I’ve seen so far are unsatisfactory in that they take no advantage of the advances in the hottest field in Computer Science: Machine Learning. But this is your lucky day, and courtesy of Google’s release of the TensorFlow Open Source Library for Machine Intelligence, I am able to remedy this problem by providing a truly intelligent (artificially!) solution:

#!/usr/bin/env python3

import tensorflow as tf

sess = tf.InteractiveSession()
n = 3

# input will be a matrix containing elements 1 ... n^2
inp = tf.reshape(tf.linspace(1.0, n*n, n*n), [n, n])
# Generate a matrix with rows 1..n
rows = tf.matmul(tf.reshape(tf.range(1, n+1), [n, 1]), 
                 tf.ones([1, n], dtype=tf.int32))
# Generate a flipped triangular matrix
indicator = rows<=tf.reverse(tf.range(1, n+1), dims=[True])
# Desired output is a triangular float matrix with ones in 
# upper left corner
desired = tf.to_float(indicator)

# We want to learn the mapping from input to output
tf.set_random_seed(2)
# Output is predicted after multiplying the input by a matrix
# which we learn automatically.
weights = tf.Variable(tf.random_normal([n, n], stddev=1.0/n/n))
predicted = tf.matmul(inp, weights) 
# L2 loss
loss = tf.reduce_mean(tf.square(predicted - desired))
optimizer = tf.train.GradientDescentOptimizer(0.005)
train = optimizer.minimize(loss)

# Run machine learning for 1000 steps. You can verify that much 
# less than this isn't sufficient.
sess.run(tf.initialize_all_variables())
for step in range(1000):
    sess.run(train)

predicted_bool = tf.equal(tf.round(predicted), 1.0)
shape = tf.shape(predicted_bool)
result = tf.select(predicted_bool, tf.fill(shape, 'Smile!'), 
                   tf.fill(shape, ''))
print(result.eval())
# Result:
# [['Smile!' 'Smile!' 'Smile!']
# ['Smile!' 'Smile!' '']
# ['Smile!' '' '']]
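For the record, the target matrix this model spends 1000 gradient steps learning can be built directly; here is a NumPy sketch of the same flipped-triangle indicator construction, no learning required:

```python
import numpy as np

n = 3
# rows[i][j] = i + 1, compared against the reversed range [3, 2, 1].
rows = np.arange(1, n + 1).reshape(n, 1) * np.ones((1, n), dtype=int)
indicator = rows <= np.arange(n, 0, -1)   # broadcasts across columns
triangle = np.where(indicator, "Smile!", "")
print(triangle)
```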